From 8b0a27ca1135f8f0b260a0a1649a3c4f5bf1df74 Mon Sep 17 00:00:00 2001
From: even <philippe.even@loria.fr>
Date: Sun, 7 Jul 2019 12:59:46 +0200
Subject: [PATCH] Answers: July 7th remarks added

---
 Article/expe.tex           |  3 ++-
 Article/intro.tex          |  2 +-
 Article/method.tex         | 17 +++++++++-------
 Methode/answerToReview.tex | 41 +++++++++++++++++++++++---------------
 4 files changed, 38 insertions(+), 25 deletions(-)

diff --git a/Article/expe.tex b/Article/expe.tex
index e6f9de0..f18a024 100755
--- a/Article/expe.tex
+++ b/Article/expe.tex
@@ -5,7 +5,8 @@
 In the experimental stage, the proposed approach is validated through
 comparisons with other recent line detectors: LSD \cite{GioiAl10},
 ED-Lines \cite{AkinlarTopal12} and CannyLines \cite{LuAl15},
-\modifRev {also written in C-like language and without any parameter settings}.
+\modifRev {written in C or C++ and
+requiring no parameter setting}.
 Only LSD provides a thickness value
 based on the width of regions with same gradient direction.
 This information does not match the line sharpness or scattering quality
diff --git a/Article/intro.tex b/Article/intro.tex
index bbbf665..a34f14b 100755
--- a/Article/intro.tex
+++ b/Article/intro.tex
@@ -55,7 +55,7 @@ to bound its scattering.
 As a side effect, these two major evolutions also led to a noticeable
 improvement of the time performance of the detector.
 They are also put forward within a global line extraction algorithm
-which can be evaluated through an online demonstration at :
+which can be evaluated through an online demonstration at:
 \href{http://ipol-geometry.loria.fr/~kerautre/ipol_demo/FBSD_IPOLDemo}{
 \small{\url{http://ipol-geometry.loria.fr/~kerautre/ipol_demo/FBSD_IPOLDemo}}}
 
diff --git a/Article/method.tex b/Article/method.tex
index dae09e2..8b49f78 100755
--- a/Article/method.tex
+++ b/Article/method.tex
@@ -177,17 +177,20 @@ the orientation of $\mathcal{D}_0$ is wrongly estimated (\RefFig{fig:escape} c).
 
 The thickness $\varepsilon$ assigned to the blurred segment recognition
 algorithm is initially set to a large value $\varepsilon_0$ in order to
-allow the detection of large blurred segments.
+allow the detection of
+%large blurred segments.
+\modifRev{thick} blurred segments.
 Then, when no more augmentation of the blurred segment thickness is observed
 after $\tau$ iterations ($\mu_{i+\tau} = \mu_i$), it is set to
 \modifRev{the observed thickness augmented by a half pixel tolerance factor,
-able to take into account all the possible discrete lines
-which digitization fits to the selected points.}
+in order to take into account all the possible discrete lines
+whose digitization fits the selected points:}
 %a much stricter value able to circumscribe the possible interpretations
 %of the segment, that take into account the digitization margins:
-\begin{equation}
-\varepsilon = \mu_{i+\tau} + \frac{\textstyle 1}{\textstyle 2}
-\end{equation}
+%\begin{equation}
+\modifRev{$\varepsilon = \mu_{i+\tau} + \frac{\textstyle 1}{\textstyle 2}$.}
+%\end{equation}
+
 This strategy aims at preventing the incorporation of spurious outliers in
 further parts of the segment.
 Setting the observation distance to a constant value $\tau = 20$ seems
@@ -199,7 +202,7 @@ In supervised context, the user draws an input stroke across the specific
 edge that he wants to extract from the image.
 The detection method previously described is continuously run during mouse
 dragging and the output blurred segment is displayed on-the-fly.
-\modifRev{More details about supervised mode are available
+\modifRev{Details about the supervised mode are discussed
 in \cite{KerautretEven09}.}
 
 An option, called {\it multi-detection} (Algorithm 1), allows the
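
A minimal C++ sketch of the adaptive assigned thickness control described in
the method.tex hunk above (section 3.4): recognition starts with a loose value
epsilon_0 and, once the observed thickness mu has not grown for tau consecutive
points (mu_{i+tau} = mu_i), it is tightened to mu + 1/2. The class and member
names below are illustrative assumptions, not the actual FBSD code from
https://github.com/evenp/FBSD.

// Hedged sketch: adaptive control of the assigned thickness epsilon during
// blurred segment recognition. Names and structure are assumptions for
// illustration, not the FBSD implementation.
class BlurredSegmentTracker
{
public:
  BlurredSegmentTracker (double epsilon0, int tau)
    : epsilon (epsilon0), tau (tau), stableCount (0), lastMu (0.0),
      tightened (false) { }

  // Called after each accepted point with the current observed thickness mu.
  // Returns the assigned thickness to use for the next extension test.
  double update (double mu)
  {
    if (! tightened)
    {
      if (mu > lastMu)
      {
        lastMu = mu;        // thickness still growing: keep the loose value
        stableCount = 0;
      }
      else if (++stableCount >= tau)   // mu_{i+tau} = mu_i
      {
        epsilon = lastMu + 0.5;        // epsilon = mu_{i+tau} + 1/2
        tightened = true;              // strict value now excludes outliers
      }
    }
    return epsilon;
  }

private:
  double epsilon;     // assigned thickness passed to the recognition test
  int tau;            // observation distance (tau = 20 in the paper)
  int stableCount;    // consecutive points without thickness augmentation
  double lastMu;      // last observed blurred segment thickness
  bool tightened;     // whether the strict value has already been set
};

With the paper's settings, such a tracker would be built with the initial
assigned thickness (3 or 7 pixels in the experiments) as epsilon_0 and
tau = 20.
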
diff --git a/Methode/answerToReview.tex b/Methode/answerToReview.tex
index 8177d4a..94f5b32 100755
--- a/Methode/answerToReview.tex
+++ b/Methode/answerToReview.tex
@@ -34,7 +34,7 @@ We would like to thank the editors and reviewers for their work and
 for their constructive comments, questions and suggestions.
 Because the paper already reaches the 10 pages limit, and in order to
 avoid removal of possibly valuable contents for paper understanding,
-additional data are put on the github, that is referenced in the paper
+additional data are added to the GitHub repository referenced in the paper
 ({\tt https://github.com/evenp/FBSD}).
 A detailed list of the changes is given below with also some specific
 answers to raised questions.
@@ -72,6 +72,9 @@ works are outperformed in terms of two evalutation metrics, i.e. C and L/N.
 encourages reproducibility.
 \item Writing quality of the paper is good.
 \end{itemize}
+\begin{answer}
+Thanks.
+\end{answer}
 
 \item {\bf 3. Weaknesses. Consider significance of key ideas, experiments,
 writing quality. Clearly explain why these are weak aspects of the paper,
@@ -88,13 +91,14 @@ latest of which was published in 2015. \\
 
 \begin{answer}
 Thanks for pointing this out. We have added this reference to the paper.
-However, we do not consider it in the experiments,
-because contrarily to the other methods in which no parameter has to be set
-for comparisons, this method has several ranking level parameters which could
-largely influence achieved results.
-Moreover, the code is written in Matlab. To ensure a fair comparison of time
-performance, it would require a complete re-programming in C-like language,
-that could produce possible rewritting bias. \\
+However, we prefer to leave this evaluation for an extended journal
+version, for several reasons:
+(i) the source code provided by the authors is written in Matlab, which
+could penalize it in time comparisons (the other codes are written in
+C or C++);
+(ii) this method has several important ranking level parameters which
+could largely influence the achieved results, and ensuring fair
+comparisons would require more time. \\
 We briefly mention it at the beginning of section 4.
 \end{answer}
 
@@ -110,7 +114,8 @@ then we exclude possible one-line steps (well-known aliasing effect) for
 nearly horizontal lines.
 This value of 1/2 is assumed to bound all the possible expansions of the
 observed line. \\
-The text was changed to precise the role of this half pixel margin.
+The text was changed in section 3.4 to clarify the role of this half pixel
+margin.
 But of course, we have no space left to discuss all these discrete geometry
 considerations in the paper.
 \end{answer}
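
A worked sketch of the half-pixel bound discussed in the answer above,
assuming a standard rounding digitization (an illustration, not a claim taken
verbatim from the paper):

% Take n+1 aligned pixels P_k = (k, 0), k = 0, ..., n, whose observed
% thickness is mu = 0. A Euclidean line y = a x + b whose digitization
% contains every P_k must satisfy |a k + b| <= 1/2 at each k, so every
% admissible interpretation stays within vertical distance 1/2 of the
% selected points. Covering all of them therefore requires at most
\[
  \varepsilon \;=\; \mu_{i+\tau} + \tfrac{1}{2},
\]
% i.e. the observed thickness augmented by the half-pixel tolerance.
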
@@ -119,12 +124,14 @@ considerations in the paper.
 that were used in the experiments and the performance of both versions of
 the method obtained on them ?
 \begin{answer}
-Due to page limitations, we could not add any figure nor respective
-performance result in the paper. However, a couple of examples of synthesized
-images is already available in the mentioned github, and we have completed the
-table with associated results. In accordance to the measured standard
-deviations obtained on the whole set of 1000 randomly generated images,
-large variations can be observed in such results on individual images.
+Due to page limitations (the organizers rather suggested adding
+complementary materials to a web page), we could not add any figure or
+corresponding performance result in the paper. However, a couple of examples
+of synthesized images are already available in the mentioned GitHub
+repository, and we have completed the table with the associated results.
+Consistent with the standard deviations measured on the whole set of 1000
+randomly generated images, large variations can be observed in such results
+on individual images.
 \end{answer}
 
 \item What is understood from the paper is the performance results presented
@@ -166,7 +173,8 @@ Therefore the initial assigned thickness is set to a greater value : 7. \\
 The other detectors aim at providing thin lines and may reject too
 scattered image lines. To adapt to this behavior, we restrict the detection
 to thin lines by setting the initial assigned thickness to 3 pixels. \\
-The text of the paper has been precised accordingly.
+The text of the paper has been clarified accordingly at the top of page IX
+and in the paragraph next to Table 1.
 \end{answer}
 
 \item I would like to see the performance of the previous version of the
@@ -212,6 +220,7 @@ of the art performaonces reported by recent papers.
 We would just like to insist on the fact that our method additionally
 provides a measure of the line thickness without degrading other performance
 with respect to some other recent detectors.
+Moreover, the automatic detection mode is another novelty described here.
 \end{answer}
 \end{itemize}
 \end{itemize}
-- 
GitLab