diff --git a/Article/conclusion.tex b/Article/conclusion.tex
index 0b0846908cf370bc5c6201a3f1bd0a181b471b08..c22dd806b7ff5a082f6af8c1e8bff1e2aa535142 100755
--- a/Article/conclusion.tex
+++ b/Article/conclusion.tex
@@ -6,27 +6,8 @@
 This paper introduced a new straight line detector based on a local analysis
 of the image gradient and on the use of blurred segments to embed an
 estimation of the line thickness.
 It relies on directional scans of the input image around maximal values of the
-gradient magnitude, and on
-%that have previously been presented in \cite{KerautretEven09}.
-%Despite of good performances achieved, the former approach suffers of two
-%major drawbacks: the inaccurate estimation of the blurred segment width
-%and orientation, and the lack of guarantee that it is completely detected.
-%These limitations were solved through the integration of two new concepts:
-%adaptive directional scans that continuously adjust the scan strip
-%to the detected blurred segment direction;
-%The main limitations of the former approach were solved through
-the integration of two new concepts:
-adaptive directional scans
-%that continuously adjust the scan strip to the detected edge direction,
-and control of assigned thickness.
-%based on the observation of the blurred segment growth.
-%Experiments on synthetic images show the better performance
-%and especially the more accurate estimation of the line thickness brought by
-%these concepts.
-%Such a result can not be compared to other approaches since they do not
-%provide any thickness estimation.
-%Moreover the performance of the unsupervised mode gives better coverage of
-%the detected edges and produces quite comparable execution time.
+gradient magnitude, and on the integration of two new concepts:
+adaptive directional scans and control of assigned thickness.
 Comparisons to other recent line detectors show competitive global
 performance in terms of execution time and mean length of output lines,
 while experiments on synthetic images indicate a better estimation of
@@ -36,26 +17,9 @@
 A residual weakness of the approach is the sensitivity to the initial
 conditions.
 In supervised context, the user can select a favourable area where the
 awaited edge is dominant.
-%This task is made quite easier, thanks to the stabilization produced by
-%the duplication of the initial detection.
 But in unsupervised context, gradient perturbations in the early stage of
 the line expansion, mostly due to the presence of close edges, can
-% deeply affect the result.
-In future works, we intend to provide solutions
-% to this drawback
-by scoring the detection result on the basis of a characterization of the
-local context.
-%
-%Then experimental validation of the consistency of the estimated width and
-%orientation values on real situations are planned in different application
-%fields.
-%In particular, straight edges are rich visual features for 3D scene
-%reconstruction from 2D images.
-%The preimage of the detected blurred segments,
-%i.e. the space of geometric entities which numerization matches this
-%blurred segment, may be used to compute some confidence level in the 3D
-%interpretations delivered, as a promising extension of former works
-%on discrete epipolar geometry \cite{NatsumiAl08}.
-
-%\section*{Acknowledgements}
+deeply affect the result.
+In future work, we intend to provide solutions by scoring the detection
+result on the basis of a characterization of the local context.
diff --git a/Article/expe.tex b/Article/expe.tex index 7cccdd94e3ea9b5bc27a451ed7d4a5adc7b519b9..4bddb4fa74fe4d42642f2109af7c1998b03e395d 100755 --- a/Article/expe.tex +++ b/Article/expe.tex @@ -2,27 +2,6 @@ \label{sec:expe} -%The main goal of this work is to detect straight segments enriched with a -%quality measure through the associated width parameter. -%In lack of available reference tool (line detector and ground truth data) -%dealing with the thickness parameter, the evaluation stage first aims -%at quantifying the benefits of the new detector compared to the previous -%one in unsupervised context on synthetic data considered as a ground truth. -%Then comparisons are made with a well established recent detector -%\cite{LuAl15} in order to check that global performance (processing time, -%ground truth covering, detected lines count and mean length) are not -%degraded. - -%The process flow of the former method (initial detection followed by two -%refinement steps) is integrated as an option into the code of the new -%detector, so that both methods rely on the same optimized basic routines. -%During all these experiments, only the blurred segment size and its -%orientation compared to the initial stroke are tested at the end of -%the initial detection, and only the segment size is tested at the end -%of the fine tracking stage. -%All other tests, sparsity or fragmentation, are disabled. -%The segment minimal size is set to 5 pixels, except where precised. - In the experimental stage, the proposed approach is validated through comparisons with other recent line detectors: LSD \cite{GioiAl10}, ED-Lines \cite{AkinlarTopal12} and CannyLines \cite{LuAl15}. @@ -44,53 +23,17 @@ is set to 15 pixels. At first, the performance of both versions of the detector (with and without the concepts) is tested on a set of 1000 synthesized images containing 10 randomly placed input segments with random thickness between 2 and 5 pixels. -%Such controlled images can be considered as ground truths. The initial assigned thickness $\varepsilon_0$ is set to 7 pixels to detect all the lines in unsupervised mode. The absolute value of the difference of each found segment to its matched input segment is measured. -%On these synthetic images, the numerical error on the gradient extraction -%biases the line width measures. This bias was first estimated using 1000 -%images containing only one input segment (no possible interaction) -%and the found value (1.4 pixel) was taken into account in the test. Results in \RefTab{tab:synth} show that the new concepts afford improved thickness and angle measurements, better precision with a smaller amount of false detections, and that they help to find most of the input segments. -%\RefTab{tab:synth} shows -%slightly better thickness and angle measurements for the new detector. -%The new detector shows more precise, with a smaller amount of false -%detections and succeeds in finding most of the input segments. Other experiments, also available at the {\it GitHub} repository, confirm these improvements. -% than the previous one. 
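Editor's note on the synthetic evaluation above: the sketch below shows how the reported error measures could be computed once detections are available. It is a minimal illustration, not the benchmark code of the paper; the dictionary-based segment representation, the angle-only matching criterion and the max_angle tolerance are assumptions. Only the measured quantities (absolute thickness and orientation differences to the matched input segment, number of false detections, number of recovered inputs) follow the text.

    # Hypothetical scoring helper for the synthetic-image evaluation (not the paper's code).

    def angle_diff_deg(a1, a2):
        """Smallest absolute difference between two line orientations, in degrees (mod 180)."""
        d = abs(a1 - a2) % 180.0
        return min(d, 180.0 - d)

    def match_and_score(found, truth, max_angle=10.0):
        """Match each detected segment to the closest input segment (here by orientation
        only, a simplification) and accumulate absolute thickness and angle errors."""
        thickness_err, angle_err, false_det, matched = [], [], 0, set()
        for f in found:
            best = min(truth, key=lambda t: angle_diff_deg(f["angle"], t["angle"]))
            d_angle = angle_diff_deg(f["angle"], best["angle"])
            if d_angle > max_angle:
                false_det += 1                      # no plausible input counterpart
                continue
            matched.add(id(best))
            thickness_err.append(abs(f["thickness"] - best["thickness"]))
            angle_err.append(d_angle)
        n = max(len(angle_err), 1)
        return {"mean_thickness_error": sum(thickness_err) / n,
                "mean_angle_error": sum(angle_err) / n,
                "false_detections": false_det,
                "found_inputs": len(matched)}

    # Toy example: two detections against two input segments of thickness 2 and 5 pixels.
    truth = [{"thickness": 2.0, "angle": 30.0}, {"thickness": 5.0, "angle": 120.0}]
    found = [{"thickness": 2.3, "angle": 31.0}, {"thickness": 4.6, "angle": 118.5}]
    print(match_and_score(found, truth))

In the paper's setting these measures would be aggregated over the 1000 test images; position information would likely enter the matching as well, but that detail is not visible in the quoted hunks.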
- -%\begin{figure}[h] -%\center -% \begin{tabular}{ -% c@{\hspace{0.1cm}}c@{\hspace{0.1cm}}c@{\hspace{0.1cm}}c@{\hspace{0.1cm}}c} -% \includegraphics[width=0.19\textwidth]{Fig_synth/statsExample.png} & -% \includegraphics[width=0.19\textwidth]{Fig_synth/statsoldPoints.png} & -% \includegraphics[width=0.19\textwidth]{Fig_synth/statsoldBounds.png} & -% \includegraphics[width=0.19\textwidth]{Fig_synth/statsnewPoints.png} & -% \includegraphics[width=0.19\textwidth]{Fig_synth/statsnewBounds.png} -% \begin{picture}(1,1) -% \put(-310,0){a)} -% \put(-240,0){b)} -% \put(-170,0){c)} -% \put(-100,0){d)} -% \put(-30,0){e)} -% \end{picture} -% \end{tabular} -% \caption{Evaluation on synthesized images: -% a) one of the test images, -% b) output blurred segments from the old detector and -% c) their enclosing digital segments, -% d) output blurred segments from the new detector and -% e) their enclosing digital segments.} -% \label{fig:synth} -%\end{figure} \begin{table} \centering \input{Fig_synth/statsTable} diff --git a/Article/intro.tex b/Article/intro.tex index cf2d41c31958a8fbf965c72993c008267ed2e222..8618407e69cb54feaae9762fede67677c9060c7c 100755 --- a/Article/intro.tex +++ b/Article/intro.tex @@ -4,13 +4,8 @@ Straight lines are commonly used as visual features for many image analysis processes. -%For instance in computer vision, they are used to estimate the vanishing -%points associated to main directions of the 3D world, thus allowing to compute camera -%orientation. They are also used to detect structured features for -%3D reconstruction. In particular in man-made environments, they are a suitable alternative to points for camera orientation \cite{DenisAl08,XuAl17}, 3D reconstruction -%\cite{HoferAl17,ParkAl15,ZaheerAl18} \cite{ParkAl15,ZaheerAl18} or also simultaneous localization and mapping \cite{HiroseSaito12,RuifangAl17}. @@ -30,9 +25,6 @@ It could also be a base for uncertainty propagation within 3D interpretation tools, in order to dispose of complementary measures to reprojection errors for local accuracy evaluation. -%Some information may sometimes be drawn from their specific context, -%for example through an analysis of the peak in a Hough transform accumulator. - In digital geometry, new mathematical definitions of classical geometric objects, such as lines or circles, have been developed to better fit to the discrete nature of most of today's data to process. @@ -43,18 +35,12 @@ Efficient algorithms have already been designed to recognize these digital objects in binary images \cite{DebledAl06}. Blurred segments seem well suited to reflect the required line quality information. -%Its preimage, -%i.e. the space of geometric entities which numerization matches this -%blurred segment, may be used to compute some confidence level in the delivered -%3D interpretations, as a promising extension of former works -%on discrete epipolar geometry \cite{NatsumiAl08}. The present work aims at designing a flexible tool to detect blurred segments with optimal thickness and orientation in gray-level images for as well supervised as unsupervised contexts. User-friendly solutions are sought, with ideally no parameter to set, or at least quite few values with intuitive meaning. -%A first attempt was already made in a previous work \cite{KerautretEven09} An interactive tool was already designed for live line extractions in gray-level images \cite{KerautretEven09}. 
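Editor's note on the digital-geometry notions the introduction relies on (digital straight lines and blurred segments of assigned thickness): a tiny sketch may help fix ideas. It assumes the standard arithmetical definition of a digital line, consistent with the thickness formula mu = nu / max(|a|, |b|) recalled later in notions.tex; the function names are illustrative only.

    # Illustration of the arithmetical digital line L(a, b, c, nu) (standard definition).

    def in_digital_line(x, y, a, b, c, nu):
        """Integer points lying between the real lines ax + by = c and ax + by = c + nu
        (half-open interval, the usual convention)."""
        return c <= a * x + b * y < c + nu

    def thickness(a, b, nu):
        """Thickness mu = nu / max(|a|, |b|): minimum of the vertical and horizontal
        distances between the two bounding lines."""
        return nu / max(abs(a), abs(b))

    # A naive line (nu = max(|a|, |b|)) of slope 2/5 contains the points (x, floor(2x/5)):
    print(all(in_digital_line(x, (2 * x) // 5, 2, -5, 0, 5) for x in range(20)))  # True
    print(thickness(2, -5, 5))                                                    # 1.0

Roughly, a blurred segment of assigned thickness epsilon is a set of points contained in such a line with mu <= epsilon, which is what carries the line quality information mentioned above.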
But the segment thickness was initially fixed by the user and not estimated, @@ -63,11 +49,8 @@ Here, the limitations of this first detector are solved by the introduction of two new concepts: (i) adaptive directional scans %(ADS) designed to better track the detected line; -%get some compliance to the unpredictable orientation problem; (ii) control of assigned thickness %(CAT) to bound its scattering. -% intended to derive more reliable information on the -%line orientation and quality. As a side effect, these two major evolutions also led to a noticeable improvement of the time performance of the detector. They are also put forward within a global line extraction algorithm diff --git a/Article/main.tex b/Article/main.tex index 6b50c6da3db9fad1ab9583436f1b0034e8f4bf6b..0371d6b13327d8cec2ff661cb3821f07c4c78e7f 100755 --- a/Article/main.tex +++ b/Article/main.tex @@ -24,10 +24,6 @@ \begin{document} \begin{frontmatter} -% \title{Straight edge detection -% based on adaptive directional tracking of blurred segments} -%% \title{Fast Directional Tracking of Thick Line Segments} - %% Proposition BK: \title{Thick Line Segment Detection with Fast Directional Tracking} \author{Philippe Even\inst{1} \and Phuc Ngo\inst{1} \and diff --git a/Article/method.tex b/Article/method.tex index 81759b615091106dd28b3750486c3b32c01b2edc..96b7f8fe7c938acb57d5176d3c602324a4123abc 100755 --- a/Article/method.tex +++ b/Article/method.tex @@ -83,11 +83,6 @@ Output segment $\mathcal{B}'$ is finally accepted based on application criteria. Final length and sparsity thresholds can be set accordingly. They are the only parameters of this local detector, together with the input assigned thickness $\varepsilon_0$. -%Too short, too sparse or too fragmented segments -%can be rejected. Length, sparsity or fragmentation thresholds are -%intuitive parameters left at the end user disposal. -%None of these tests are activated for the experimental stage in order -%to put forward achievable performance. \subsection{Adaptive directional scan} \label{subsec:ads} @@ -157,7 +152,6 @@ S_i = \mathcal{D}_i \cap \mathcal{N}_i \cap \mathcal{I} \left| \begin{array}{l} \vec{V}(\mathcal{N}_i) \cdot \vec{V}(\mathcal{D}_0) = 0 \\ \wedge~ h(\mathcal{N}_i) = h(\mathcal{N}_{i-1}) + p(\mathcal{D}_0) \\ -%\wedge~ \mathcal{D}_{i} = \mathcal{D} (C_{i-1}, \vec{D}_{i-1}, w_{i-1}), \wedge~ \mathcal{D}_{i} = \mathcal{D}^{C_{i-1}, \vec{D}_{i-1}, \mu_{i-1}}, i > \lambda \end{array} \right. \right\} @@ -165,7 +159,6 @@ i > \lambda where $C_{i}$, $\vec{D}_{i}$ and $\mu_{i}$ are respectively a position, a director vector and a thickness observed at iteration $i$, used to update the scan strip and lines in accordance to \RefEq{eq:dsdef2}. -%In the scope of the present detector, The last clause expresses the update of the scan bounds at iteration $i$: $C_{i-1}$, $\vec{D}_{i-1}$ and $\mu_{i-1}$ are respectively the intersection of the input selection and the central line of $\mathcal{B}_{i-1}$, @@ -204,61 +197,6 @@ edge that he wants to extract from the image. The detection method previously described is continuously run during mouse dragging and the output blurred segment is displayed on-the-fly. -%The method is quite sensitive to the local conditions of the initial detection -%so that the output blurred segment may be quite unstable. -%In order to temper this undesirable behavior for interactive applications, -%the initial detection can be optionally run twice, the second fast scan being -%aligned on the first detection output. 
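Editor's note on the adaptive directional scan defined above (in particular the clause D_i = D^{C_{i-1}, D_{i-1}, mu_{i-1}} for i > lambda): the skeleton below shows the tracking loop that this update rule induces. Everything here is illustrative; Strip, candidate_points, segment_extend, segment_state and the value of LAMBDA are hypothetical names standing for the gradient-based point selection and the blurred-segment recognition the paper relies on. Only the control structure, scan lines processed one by one with the strip re-derived from the current segment position, direction and thickness after the first lambda iterations, follows the text.

    import math

    LAMBDA = 10     # number of initial scans before the strip starts to adapt (illustrative)

    class Strip:
        """Scan strip given by a central point c, a unit direction d and a thickness w."""
        def __init__(self, c, d, w):
            n = math.hypot(d[0], d[1])
            self.c, self.d, self.w = c, (d[0] / n, d[1] / n), w

        def contains(self, p):
            """True if p lies within w/2 of the central line (used to clip scan lines)."""
            dist = abs((p[1] - self.c[1]) * self.d[0] - (p[0] - self.c[0]) * self.d[1])
            return dist <= self.w / 2.0

    def adaptive_scan(strip0, candidate_points, segment_extend, segment_state):
        """Skeleton of the adaptive directional scan (one side only; the detector
        actually alternates scans on both sides of the start point)."""
        strip, i = strip0, 0
        while True:
            pts = candidate_points(i, strip)   # scan line i clipped by the current strip
            if pts is None:                    # scan left the image: stop
                break
            segment_extend(pts)                # try to extend the blurred segment
            if i > LAMBDA:
                # Re-center the strip on the detected segment, so that it keeps
                # containing the line even when the initial direction was off.
                c, d, mu = segment_state()
                strip = Strip(c, d, mu)
            i += 1
        return segment_state()

Note that the scan lines themselves keep the direction and step of the initial scan (first two clauses of the definition); only the strip that clips them follows the detected segment.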
-%This strategy provides a first quick analysis of the local context before -%extracting the segment and contributes to notably stabilize the overall -%process. -% -%When selecting candidates for the fine detection stage, an option, called -%{\it edge selection mode}, is left to also filter the points according to -%their gradient direction. -%In {\it main edge selection mode}, only the points with a gradient vector -%in the same direction as the start point gradient vector are added to the -%blurred segment. -%In {\it opposite edge selection mode}, only the points with an opposite -%gradient vector direction are kept. -%In {\it line selection mode} this direction-based filter is not applied, -%and all the candidate points are aggregated into a same blurred segment, -%whatever the direction of their gradient vector. -%As illustrated on \RefFig{fig:edgeDir}, this mode allows the detection of -%the two opposite edges of a thin straight object. -% -%\begin{figure}[h] -%\center -% \begin{tabular}{c@{\hspace{0.2cm}}c} -% \includegraphics[width=0.4\textwidth]{Fig_method/selectLine_zoom.png} & -% \includegraphics[width=0.4\textwidth]{Fig_method/selectEdges_zoom.png} -% \end{tabular} -% \begin{picture}(1,1)(0,0) -% {\color{dwhite}{ -% \put(-220,-14.5){\circle*{8}} -% \put(-74,-14.5){\circle*{8}} -% }} -% \put(-222.5,-17){a} -% \put(-76.5,-17){b} -% \end{picture} -% \caption{Blurred segments obtained in \textit{line} or \textit{edge -% selection mode} as a result of the gradient direction filtering -% when adding points. -% In \textit{line selection mode} (a), a thick blurred segment is -% built and extended all along the brick join. -% In \textit{edge selection mode} (b), a thin blurred segment is -% built along one of the two join edges. -% Both join edges are detected with the \textit{multi-selection} -% option. -% On that very textured image, they are much shorter than the whole -% join detected in line selection mode. -% Blurred segment points are drawn in black color, and the enclosing -% straight segments in blue.} -% \label{fig:edgeDir} -%\end{figure} - -%\subsection{Multiple blurred segments detection} - An option, called {\it multi-detection} (Algorithm 1), allows the detection of all the segments crossed by the input stroke $AB$. In order to avoid multiple detections of the same edge, an occupancy mask, @@ -281,10 +219,6 @@ segments $\mathcal{B}_j'$ at the end of each successful detection blurred segment extension in the fine tracking step. \end{enumerate} -%In edge selection mode (\RefFig{fig:edgeDir} b), the multi-detection -%algorithm is executed twice, first in main edge selection mode, then -%in opposite edge selection mode. - \subsection{Automatic blurred segment detection} An unsupervised mode is also proposed to automatically detect all the @@ -305,34 +239,3 @@ for testing from the online demonstration and from a \textit{GitHub} source code repository: \\ \href{https://github.com/evenp/FBSD}{ \small{\url{https://github.com/evenp/FBSD}}} - -%\input{Fig_method/algoAuto} - -%The behavior of the unsupervised detection is depicted through the two -%examples of \RefFig{fig:auto}. -%The example on the left shows the detection of thin straight objects on a -%circle with variable width. -%On the left half of the circumference, the distance between both edges -%exceeds the initial assigned width and a thick blurred segment is build -%for each of them. Of course, on a curve, a continuous thickenning is -%observed untill the blurred segment minimal width reaches the initial -%assigned width. 
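Editor's note on the multi-detection option described above: it essentially wraps the single detection in a loop over candidate start points along the stroke, with an occupancy mask preventing the same edge from being reported twice. The sketch below only reflects that description; the choice of candidates (local maxima of the gradient magnitude along the stroke, plausible given the rest of the paper) and all names and data structures are assumptions.

    def local_maxima_along(stroke, grad):
        """Candidate start points: local maxima of the gradient magnitude along the stroke."""
        vals = [grad.get(p, 0.0) for p in stroke]
        return [stroke[i] for i in range(1, len(stroke) - 1)
                if vals[i] > vals[i - 1] and vals[i] >= vals[i + 1]]

    def multi_detection(stroke, grad, detect):
        """Sketch of the multi-detection loop: each candidate point along the stroke AB
        starts a detection, unless a previously found segment already covers it."""
        occupancy = set()                  # pixels already assigned to an output segment
        segments = []
        for p in local_maxima_along(stroke, grad):
            if p in occupancy:             # this edge was already detected: skip it
                continue
            seg = detect(p)                # initial detection + fine tracking (stub)
            if seg:
                segments.append(seg)
                occupancy.update(seg)      # mark the detected pixels as occupied
        return segments

    # Toy run with a fake detector that returns a fixed pixel set around the seed.
    stroke = [(x, 10) for x in range(30)]
    grad = {(x, 10): (1.0 if x in (8, 9, 10, 20) else 0.1) for x in range(30)}
    fake_detect = lambda p: {(p[0] + k, 10) for k in range(-2, 3)}
    print(len(multi_detection(stroke, grad, fake_detect)))   # 2 segments

According to the text, the unsupervised mode then reuses this mechanism to detect all the segments of an image; the quoted hunks do not show how the strokes are chosen in that mode, so this sketch stops at the stroke level.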
-%On the right half, both edges are encompassed in a common blurred segment, -%and at the extreme right part of the circle, the few distant residual points -%are grouped to form a thick segment. -% -%The example on the right shows the limits of the edge detector on a picture -%with quite dense repartition of gradient. -%All the salient edges are well detected but they are surrounded by a lot -%of false detections, that rely on the presence of many local maxima of -%the gradient magnitude with similar orientations. -% -%\begin{figure}[h] -%\center -% \begin{tabular}{c@{\hspace{0.2cm}}c} -% \includegraphics[width=0.37\textwidth]{Fig_method/vcercleAuto.png} & -% \includegraphics[width=0.58\textwidth]{Fig_method/plafondAuto.png} -% \end{tabular} -% \caption{Automatic detection of blurred segments.} -% \label{fig:auto} -%\end{figure} diff --git a/Article/notions.tex b/Article/notions.tex index bf161823d1948828061bba22d528984e4213c490..e1a660b589594eedd94fa7a5935bbc8d9e67a231 100755 --- a/Article/notions.tex +++ b/Article/notions.tex @@ -23,7 +23,6 @@ When $\nu = p(\mathcal{L})$, then $\mathcal{L}$ is the narrowest 8-connected line and is called a {\it naive line}. The {\it thickness} $\mu = \frac{\nu}{max(|a|,|b|)}$ of -% the digital straight line $\mathcal{L}(a,b,c,\nu)$ is the minimum of the vertical and horizontal distances between lines $ax + by = c$ and $ax + by = c + \nu$. @@ -41,13 +40,6 @@ A linear-time algorithm to recognize a blurred segment of assigned thickness $\varepsilon$ \cite{DebledAl05} is used in this work. It is based on an incremental growth of the convex hull of the blurred segment when adding each point $P_i$ successively. -%The minimal width $\mu$ of the blurred segment $\mathcal{B}$ is the -%arithmetical width of the narrowest digital straight line that contains -%$\mathcal{B}$. -%It is also the minimal width of the convex hull of $\mathcal{B}$, -%that can be computed by Melkman's algorithm \cite{Melkman87}. -%The enclosing digital segment $E(\mathcal{B})$ is the section of this -%optimal digital straight line bounded by the end points of $\mathcal{B}$. As depicted on \RefFig{fig:bs}, the extension of the blurred segment $\mathcal{B}_{i-1}$ of assigned thickness $\varepsilon$ and thickness $\mu_{i-1}$ at step $i-1$ with a new input @@ -83,7 +75,6 @@ DS = \left\{ S_i = \mathcal{D} \cap \mathcal{N}_i \cap \mathcal{I} \vec{V}(\mathcal{N}_i) \cdot \vec{V}(\mathcal{D}) = 0 \\ \wedge~ h(\mathcal{N}_i) = h(\mathcal{N}_{i-1}) + p(\mathcal{D}) \end{array} \right. \right\} -%S_i = \mathcal{D} \cap \mathcal{N}_i, \mathcal{N}_i \perp \mathcal{D} \end{equation} In this definition, the clause $\vec{V}(\mathcal{N}_i) \cdot \vec{V}(\mathcal{D}) = 0$ @@ -102,7 +93,6 @@ At each iteration $i$, the scans $S_i$ and $S_{-i}$ are successively processed. \begin{figure}[h] \center -% \input{Fig_notions/fig} \includegraphics[width=0.8\textwidth]{Fig_notions/scanstrip.eps} \begin{picture}(1,1)(0,0) \thicklines @@ -154,11 +144,6 @@ $\mathcal{N}_i^{A,B}$: \end{array} \right. \end{equation} -%The scan lines length is $d_\infty(AB)$ or $d_\infty(AB)-1$, where $d_\infty$ -%is the chessboard distance ($d_\infty = max (|d_x|,|d_y|)$). -%In practice, this difference of length between scan lines is not a drawback, -%as the image bounds should also be processed anyway. - A directional scan can also be defined by a central point $C(x_C,y_C)$, a direction $\vec{D}(X_D,Y_D)$ and a minimal thickness $w$. If we note