diff --git a/ICHLL_Brenon.md b/ICHLL_Brenon.md
index a9ca58eedb07f10368284b320d979ec1de5e7dba..d61cbde86af179b555267f27c1b18523ceeabc5b 100644
--- a/ICHLL_Brenon.md
+++ b/ICHLL_Brenon.md
@@ -675,7 +675,7 @@ information to the reader about the current page number along with the headwords
 of the first and last articles appearing on the page. Those can be encoded by
-`<fw/>` elements ("forme work") which `place` and `type` attributes should be
+`<fw/>` elements ("forme work") whose `place` and `type` attributes should be
 set to position them on the page and identify their function if it has been
-recognized (those short elements on the border of pages are the ones typically
+recognised (those short elements on the border of pages are the ones typically
 prone to suffer damages or be misread by the OCR).
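+
+For instance, a page header could be encoded as follows (a hypothetical
+fragment: the attribute values and numbers shown are only illustrative):
+
+```xml
+<fw type="header" place="top-left">CATHARES</fw>
+<fw type="pageNum" place="top-centre">312</fw>
+<fw type="header" place="top-right">CATHÈTE</fw>
+```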
 
 Finally there are other TEI elements useful to represent "events" in the flow of
@@ -692,7 +692,7 @@ may and should be used with our encoding scheme.
 The reference implementation for this encoding scheme is the program
 soprano[^soprano] developed within the scope of project DISCO-LGE to
 automatically identify individual articles in the flow of raw text from the
-column and to encode them into XML-TEI files. Though this software has already
+columns and to encode them into XML-TEI files. Though this software has already
 been used to produce the first TEI version of *La Grande Encyclopédie*, it
-doesn't yet follow the above specification perfectly. Here is for instance the
-encoded version of article "Cathète" currently it produces:
+doesn't yet follow the above specification perfectly. Here is, for instance,
+the encoded version of article "Cathète" that it currently produces:
@@ -702,9 +702,29 @@ encoded version of article "Cathète" currently it produces:
 ![](snippets/cathète_current.png)
 
 The headword detection system is not able to capture the subject indicators yet
-so it appears outside of the `<head/>` element. Likewise, since the detection of
-titles at the begining of each section isn't complete, no structure analysis is
-performed on the content of the article
+so they appear outside of the `<head/>` element. No work is performed either to
+expand abbreviations and encode them as such, or to distinguish between domain
+indicators and people's names.
+
+Likewise, since the detection of titles at the beginning of each section isn't
+complete, no structure analysis is performed on the content of the article: it
+is placed directly under the article's `<div/>` element at the moment, instead
+of under a set of nested `<div/>` elements, the topmost having a `type`
+attribute of `sense`. The paragraphs are not yet identified and hence not
+encoded.
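+
+The target structure could be sketched as follows (a hypothetical fragment: the
+nesting follows the above specification, but the headword and the abridged
+content are only illustrative):
+
+```xml
+<div><!-- one article -->
+  <head>CATHÈTE</head>
+  <div type="sense">
+    <p>…</p>
+  </div>
+</div>
+```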
+
+However, the figures and their captions are already handled correctly when they
+occur. The encoder also keeps track of the current lines, pages, and columns,
+inserts the corresponding empty elements (`<lb/>`, `<pb/>` or `<cb/>`), and
+numbers the pages so that the numbering of the physical pages is available. This
+numbering differs from the "high-level" page numbers inserted by the editors,
+which start with an offset because the first, blank or almost empty pages at the
+beginning of each book do not have a number, and which sometimes have gaps when
+a full-page geographical map is inserted, since those are printed separately on
+a different folio which remains outside of the textual numbering system. The
+position at which these layout elements occur is determined by where the OCR
+software detected them and by the reordering performed by `soprano` when
+inferring the reading order before segmenting the articles.
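+
+As an illustration, the layout milestones could appear as in the following
+hypothetical fragment (the element names are those listed above; the page and
+column numbers are invented, and `…` stands for transcribed text):
+
+```xml
+<pb n="37"/>
+<cb n="1"/>
+<p>…<lb/>
+…</p>
+```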
 
 ## The constraints of automated processing