OurBigBook
The OurBigBook Project is creating the ultimate tool to publish textbooks, personal knowledge bases, Zettelkasten and digital gardens in the learn-in-public philosophy, and to be our best shot yet at the final real-world Encyclopedia Galactica, by allowing effective mind melding/collective intelligence.
Figure 1. Logo of the OurBigBook Project.
Mission: to live in a world where you can learn university-level mathematics, physics, chemistry, biology and engineering from perfect free open source books that anyone can write to get famous.
Ultimate goal: destroy the currently grossly inefficient education system and replace it with a much more inspiring system where people learn what they want as fast as possible to reach their goals faster without so much useless pain.
How to get there: create a website that incentivizes learners (notably university students taking courses) to write freely licensed university-level natural science books in their own words for free. Their motivations for doing so are:
  • getting their knowledge globally recognized and thus better jobs
  • improving the world
  • learning by teaching
Notable features:
The OurBigBook Web website is the main tool of the project. OurBigBook CLI is another complementary tool.
You can donate to the OurBigBook Project to sponsor its development in the following ways:
We are happy to discuss paid contracts to implement specific features. To get in touch, see: contact.

2. Quick start

words: 104
The following sections cover different ways to use tools from the OurBigBook Project:

3.1. Features

words: 1k

3.2. Design goals

words: 1k articles: 4
OurBigBook is designed entirely to allow writing complex professional HTML and PDF scientific books, blogs, articles and encyclopedias.
OurBigBook aims to be the ultimate LaTeX "killer", allowing books to be finally published as either HTML or PDF painlessly (LaTeX being only a backend to PDF generation).
It aims to be more powerful and saner than Markdown and Asciidoctor.

3.2.1. Saner

words: 417
Originally, OurBigBook was meant to be both saner and more powerful than Markdown and Asciidoctor.
But alas, as Ciro started implementing and using it, he started to bring some of the Markdown insanity he missed back in.
And so the language "degraded" into one only slightly saner than Asciidoctor, but with an amazing Node.js implementation that makes it better for book writing and website publishing.
Notably, we hope that our escaping is a bit saner: a backslash escapes everything, instead of Asciidoctor's "different escapes for every case" approach: github.com/asciidoctor/asciidoctor/issues/901
But hopefully, having started from a saner point will still produce a saner end result, e.g. there are sane constructs for every insane one.
It is intended that this will be an acceptable downside, as OurBigBook will be used primarily for large complex content such as books rather than forum posts, and will therefore be primarily written either:
For example, originally OurBigBook had exactly five magic characters, with functions similar to those in LaTeX:
  • \: backslash to start a macro, like LaTeX
  • { and }: left and right curly braces to delimit named (optional) macro arguments
  • [ and ]: left and right square brackets to delimit positional (mandatory) macro arguments
and a double newline for paragraphs, if you are being pedantic; but this later degenerated into many more with the insane macro shortcuts.
We would like to have only square brackets for both optional and mandatory arguments, to have even fewer magic characters, but that would make the language difficult to parse for both computers and humans. LaTeX was right for once!
This produces a very regular syntax that is easy to learn, including doing:
  • arbitrary nesting of elements
  • adding arbitrary properties to elements
This sanity also makes the long tail of the learning curve, the endless edge cases found in Markdown and Asciidoctor, disappear.
The language is designed to be philosophically isomorphic to HTML to:
  • further reduce the learning curve
  • ensure that most of HTML constructs can be reached, including arbitrary nesting
More precisely:
  • macro names map to tag names, e.g. \a maps to <a>
  • one of the arguments of a macro maps to the content of the HTML element, and the others map to attributes.
    E.g., in a link:
    \a[http://example.com][Link text]
    the first macro argument:
    http://example.com
    maps to the href of <a>, and the second macro argument:
    Link text
    maps to the internal content of <a>Link text</a>.
The high sanity of OurBigBook also makes creating new macro extensions extremely easy and intuitive.
All built-in language features use the exact same API as new extensions, which ensures that the extension API stays sane forever.
Markdown is clearly missing many key features such as block attributes and cross references, and has no standardized extension mechanism.
The "more powerful than Asciidoctor" part is only partially true, since Asciidoctor is very featureful and can do basically anything through extensions.
The difference is mostly that OurBigBook is completely and entirely focused on making amazing scientific books, and so will have key features for that application out-of-the box, notably:
and we feel that some of those features have required specialized code that could not be easily implemented as a standalone macro.
Another advantage over Asciidoctor is that the reference implementation of OurBigBook is in JavaScript, and can therefore be used for in-browser live preview out of the box. Asciidoctor does transpile to JS with Opal, but who wants to deal with that layer of complexity?
Static wiki generators: this is perhaps the best way of classifying this project :-)
Static book generators:
Less related but of interest, similar philosophy to what Ciro wants, but no explicitly reusable system:
Ciro Santilli developed OurBigBook to perfectly satisfy his writing style, which is basically "create one humongous document where you document everything you know about a subject so everyone can understand it, and just keep adding to it".
cirosantilli.com is the first major document that he has created in OurBigBook.
He decided to finally create this new system after repeatedly facing limitations of Asciidoctor which were ignored/wontfixed upstream, because Ciro's writing style is not commonly targeted by Asciidoctor.
The following large documents, on which Ciro worked extensively:
made the limitations of Asciidoctor clear to Ciro, and were a major motivation for this work.
The key limitations that repeatedly annoyed Ciro were:
  • cannot go over header level 6, addressed at: unlimited header levels
  • the need for -S, --split-headers to avoid a single too-large HTML output that would never get indexed properly by search engines, and takes a few seconds to load in any browser, which is an unacceptable user experience
OurBigBook Markup is the lightweight markup language used in the OurBigBook Project.
It works both on the OurBigBook Web dynamic website, and with OurBigBook CLI static websites generated from the command line.
OurBigBook Markup files use the .bigb extension.
Paragraphs are made by simply adding an empty line, e.g.:
My first paragraph.

And now my second paragraph.

Third one to finish.
which renders as:
My first paragraph.
And now my second paragraph.
Third one to finish.
Headers are created by starting the line with equal signs. The more equal signs the deeper you are, e.g.:
= Animal

== Mammal

=== Dog

=== Cat

== Bird

=== Pigeon

=== Chicken
On OurBigBook Web, the toplevel header of each page goes into a separate title box, so there things would look like:
  • title box: "Animal"
  • body:
    == Mammal
    
    === Dog
    
    === Cat
    
    == Bird
    
    === Pigeon
    
    === Chicken
You can use any header as a tag of any other header, e.g.:
= Animal

== Dog
{tag=Cute animal}

== Turtle
{tag=Ugly animal}

== Animal cuteness

=== Cute animal

=== Ugly animal
Headers have several powerful features that you can read more about under \H arguments, e.g. \H synonym argument and \H disambiguate argument.
To link to any of your other pages, you can use angle brackets (less than/greater than signs):
I have a <cute animal>. <Birds> are too noisy.
Note how capitalization and pluralization generally just work.
To use a custom link text on a reference, use the following syntax:
I have a <cute animal>[furry animal]. <Birds>[feathery animals] are too noisy.
External links can be input directly as:
This is a great website: https://example.com

I really like https://example.com[this website].
which renders as:
This is a great website: example.com
I really like this website.
Code blocks are done with backticks `. With just one backtick, you get a code block inside the text:
The function call `f(x + 1, "abc")` is wrong.
which renders as:
The function call f(x + 1, "abc") is wrong.
and with two or more backticks you get a code block on its own line, possibly with multiple code lines:
The function:
``
function f(x, s) {
  return x + s
}
``
is wrong.
which renders as:
The function:
function f(x, s) {
  return x + s
}
is wrong.
Mathematics syntax is very similar to code blocks: you just enter your LaTeX code in it:
The number $\sqrt{2}$ is irrational.

The same goes for:
$$
\frac{1}{\sqrt{2}}
$$
which renders as:
The number √2 is irrational.
The same goes for:
We also have a bunch of predefined macros from popular packages, e.g. \dv from the physics package for derivatives:
$$
\dv{x^2}{x} = 2x
$$
which renders as:
You can refer to specific equations like this:
As shown in <equation Very important equation>, this is true.

$$
\frac{1}{\sqrt{2}}
$$
{title=Very important equation}
which renders as:
As shown in Equation 3. "Very important equation", this is true.
Equation 3. Very important equation.
Images and videos are also easy to add and refer to:
As shown at <image Cute chicken chick>, chicks are cute.

\Image[https://upload.wikimedia.org/wikipedia/commons/thumb/c/c9/H%C3%BChnerk%C3%BCken_02.jpg/800px-H%C3%BChnerk%C3%BCken_02.jpg?20200716091201]
{title=Cute chicken chick}

\Video[https://www.youtube.com/watch?v=j_fl4xoGTKU]
{title=Top Down 2D Continuous Game by Ciro Santilli (2018)}
which renders as:
As shown at Figure 2. "Cute chicken chick", chicks are cute.
Figure 2. Cute chicken chick. Source.
Video 3. Top Down 2D Continuous Game by Ciro Santilli (2018) Source.
Images can take a bunch of options, which you can read more about at image arguments. Most should be self-explanatory; here is an image with a bunch of useful arguments:
\Image[https://upload.wikimedia.org/wikipedia/commons/thumb/c/c9/H%C3%BChnerk%C3%BCken_02.jpg/800px-H%C3%BChnerk%C3%BCken_02.jpg?20200716091201]
{title=Ultra cute chicken chick}
{description=
The chicken is yellow, and the hand is brown.

The background is green.
}
{border}
{height=400}
{source=https://commons.wikimedia.org/wiki/File:H%C3%BChnerk%C3%BCken_02.jpg}
which renders as:
Figure 3. Ultra cute chicken chick. Source.
The chicken is yellow, and the hand is brown.
The background is green.
Lists are written by starting the line with an asterisk *:
* first item
* second item
* and the third
which renders as:
  • first item
  • second item
  • and the third
A nested list:
* first item
  * first item version 1
  * first item version 2
    * first item version 2 1
    * first item version 2 2
* second item
* and the third
which renders as:
  • first item
    • first item version 1
    • first item version 2
      • first item version 2 1
      • first item version 2 2
  • second item
  • and the third
Lists items can contain any markup, e.g. paragraphs. You just need to keep the same number of spaces, e.g.:
* first item.

  Second paragraph of first item.

  And a third one.
* second item
  * second item v1

    Another paragraph in second item v1
  * second item v2
which renders as:
  • first item.
    Second paragraph of first item.
    And a third one.
  • second item
    • second item v1
      Another paragraph in second item v1
    • second item v2
Tables are not very different from lists. We use double pipes for headers ||, and a single pipe | for regular rows:
|| City
|| Sales

| Salt Lake City
| 124,00

| New York
| 1,000,000
which renders as:
City | Sales
Salt Lake City | 124,00
New York | 1,000,000
To add a title we need to use an explicit \Table macro as in:
See <table Sales per city> for more information.

\Table
{title=Sales per city}
[
|| City
|| Sales

| Salt Lake City
| 124,00

| New York
| 1,000,000
]
which renders as:
See Table 1. "Sales per city" for more information.
Table 1. Sales per city.
City | Sales
Salt Lake City | 124,00
New York | 1,000,000

4.2. Macro

words: 17k articles: 198
This section documents all OurBigBook macros.
Macros are magic commands that do cool stuff, e.g. \Image to create an image.
The most common macros also have insane macro shortcuts to keep the syntax shorter.
The general macro syntax is described at Section 4.3. "OurBigBook Markup syntax".
Insane autolink, i.e. the link text is the same as the link address:
The website http://example.com is cool. See also:

\Q[http://example.com/2]
which renders as:
The website example.com is cool. See also:
example.com/2
Exact parsing rules described at: Section 4.2.1.4. "Insane link parsing rules".
Note that the prefixes http:// and https:// are automatically removed from the displayed link, since they are so common that they would simply add noise.
Equivalent sane version:
The website \a[http://example.com] is cool.

\Q[\a[http://example.com/2]]
which renders as:
The website example.com is cool.
example.com/2
Insane link with custom text:
The website http://example.com[example.com] is cool.
which renders as:
The website example.com is cool.
Equivalent sane version:
The website \a[http://example.com][example.com] is cool.
which renders as:
The website example.com is cool.
If the custom text is empty, an autolink is generated. This is often useful if you want your link to be followed by punctuation:
The website is really cool: http://example.com[].
which renders as:
The website is really cool: example.com.
This could also be achieved with the sane syntax of course, but this pattern saves a tiny bit of typing.
Link with multiple paragraphs inside it:
\a[http://example.com][Multiple

paragraphs]
which renders as:
Link to a file in the current repository:
The file \a[index.js] is cool.
which renders as:
The file index.js is cool.
This links to a raw view of that file.
Link to a directory in the current repository:
The directory \a[file_demo] is cooler.
which renders as:
The directory file_demo is cooler.
This links to an output file that contains a generated directory listing of that directory.
The link target, e.g. in:
\a[http://example.com]
href equals http://example.com.
Important behaviours associated with this property for local links are detailed at Section 4.2.1.3. "\a external argument":
Analogous to the \x ref argument, e.g.:
Trump said this and that.https://en.wikipedia.org/wiki/Donald_Trump_Access_Hollywood_tape#Trump's_responses{ref}https://web.archive.org/web/20161007210105/https://www.donaldjtrump.com/press-releases/statement-from-donald-j.-trump{ref} Then he said that and this.https://en.wikipedia.org/wiki/Donald_Trump_Access_Hollywood_tape#Trump's_responses{ref}https://web.archive.org/web/20161007210105/https://www.donaldjtrump.com/press-releases/statement-from-donald-j.-trump{ref}
which renders as:
Trump said this and that.[ref][ref] Then he said that and this.[ref][ref]
4.2.1.3. \a external argument
words: 815 articles: 7
If given and true, forces the link to be an external link.
Otherwise, the external property is automatically guessed based on the address given, as explained at Section 4.2.1.3.3. "External link".
A common use case for the external argument is to link to non-OurBigBook content in the current domain, e.g.:
The \a external argument can be used to refer to the root of the domain. E.g. suppose that we have a subdirectory deployment under https://mydomain.com/subdir/. Then:
  • \a[/somepath] refers to the directory /subdir/somepath
  • \a[/somepath]{external} refers to the directory /somepath
TODO test if it works. But we want it to be possible to deploy OurBigBook CLI static websites on subdirectories, e.g.:
https://mydomain.com/subdir/
https://mydomain.com/subdir/mathematics
If it doesn't work, it should be easy to make it work, as we use relative links almost everywhere already. Likely there would only be some minor fixes to the --template arguments.
An external link is a link that points to a resource that is not present in the current OurBigBook project sources.
By default, most links are internal links, e.g. it is often the case in computer programming tutorials that we want to refer to source files in the current directory. So from our README.bigb, we could want to write something like:
Have a look at this amazing source file: \a[index.js].
which renders as:
Have a look at this amazing source file: index.js.
and here \a[ourbigbook] is an internal link.
A typical external link is something like:
This is a great website: https://cirosantilli.com
which renders as:
This is a great website: cirosantilli.com
which points to an absolute URL.
OurBigBook considers a link external by default if:
Therefore, the following links are external by default:
  • http://cirosantilli.com
  • https://cirosantilli.com
  • file:///etc/fstab
  • ftp://cirosantilli.com
and the following are internal by default:
  • index.js
  • ../index.js
  • path/to/index.js
  • /path/to/index.js. Note that paths starting with / refer to the root of the OurBigBook CLI deployment, not the root of the domain, see: link to the domain root path.
  • //example.com/path/to/index.js
A link being internal has the following effects:
  • the correct relative path to the file is used when using nested scopes with -S, --split-headers. For example, if we have:
    = h1
    
    == h2
    {scope}
    
    === h3
    
    \a[index.js]
    then in split header mode, h3 will be rendered to h2/h3.html.
    Therefore, if we didn't do anything about it, the link to index.js would render as href="index.js" and thus point to h2/index.js instead of the correct index.js.
    Instead, OurBigBook automatically converts it to the correct href="../index.js"
  • the _raw directory prefix is added to the link
  • existence of the file is checked on compilation. If it does not exist, an error is given.
Implemented at: github.com/ourbigbook/ourbigbook/issues/87 as relative, and subsequently modified to the more accurate/useful external.
The _dir directory tree contains file listings of the files in the _raw directory.
We originally wanted to place these listings under _raw itself, but this leads to unsolvable conflicts when a file called index.html is present alongside the index.
Analogous to the _raw directory, but for the \H file argument.
OurBigBook places output files that are not the output of .bigb to .html conversion under the _raw/ prefix of the output directory.
Internal links then automatically add the _raw/ prefix to every link.
For example, consider an input directory that contains:
notindex.bigb
= Hello

Check out \a[myfile.c].

The source code for this file is at: \a[notindex.bigb].

\Image[myimg.png]
myfile.c
int i = 1;
myimg.png
Binary!
After conversion with:
ourbigbook .
the following files would exist in the output directory:
  • notindex.html: converted output of notindex.bigb
  • _raw/notindex.bigb: a copy of the input source code notindex.bigb
  • _raw/myfile.c: a copy of the input file myfile.c
  • _raw/myimg.png: a copy of the input file myimg.png
and all links/image references would work and automatically point to the correct locations under _raw.
Some live examples:
  • link to a file:
    The file \a[index.js] is cool.
    which renders as:
    The file index.js is cool.
  • link to a directory:
    The directory \a[file_demo] is cooler.
    which renders as:
    The directory file_demo is cooler.
The reason why a _raw prefix is needed is to avoid naming conflicts with OurBigBook outputs, e.g. suppose we had the files:
  • configure
  • configure.bigb
Then, on a server that omits the .html extension, if we didn't have _raw/, both configure.html and configure would want to be served under /configure. With _raw we instead get:
  • _raw/configure: the input configure file
  • configure: the HTML output of configure.bigb
A URL with protocol is a URL that matches the regular expression ^[a-zA-Z]+://. The following are examples of URLs with protocol:
  • http://cirosantilli.com
  • https://cirosantilli.com
  • file:///etc/fstab
  • ftp://cirosantilli.com
The following aren't:
  • index.js
  • ../index.js
  • path/to/index.js
  • /path/to/index.js
  • //example.com/path/to/index.js. This one is a bit tricky. Web browsers consider this a protocol-relative URL, which technically implies a protocol, although that protocol would be different depending on how you are viewing the file, e.g. locally through file:// vs on a website with https://.
    For simplicity's sake, we just consider it a URL without protocol.
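The check above can be expressed directly as a small JavaScript predicate (an illustration of the stated regular expression, not the actual implementation):

```javascript
// "URL with protocol" check, using the regular expression given above.
// Illustration only, not the actual OurBigBook implementation.
const hasProtocol = (url) => /^[a-zA-Z]+:\/\//.test(url);

// URLs with protocol:
console.log(hasProtocol('http://cirosantilli.com')); // true
console.log(hasProtocol('file:///etc/fstab'));       // true

// URLs without protocol, including the protocol-relative case:
console.log(hasProtocol('../index.js'));                    // false
console.log(hasProtocol('//example.com/path/to/index.js')); // false
```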
Insane links start at any of the recognized protocols, which are the ones shown at: Section 4.4.3. "Known URL protocols":
  • http://
  • https://
absolutely anywhere, if not escaped, e.g.:
ahttp://example.com
renders something like:
a <a href="http://example.com">
To prevent expansion, you have to escape the protocol with a backslash \\, e.g.:
\http://example.com
Empty domains like:
http://
don't become links, however. But this one does:
http://a
Insane links end when any insane link termination character is found.
As a consequence, to have an insane link followed immediately by a punctuation like a period you should use an empty argument as in:
Check out this website: http://example.com[].
which renders as:
Check out this website: example.com.
otherwise the punctuation will go in it. Another common use case is:
As mentioned on the tutorial (http://example.com[see this link]).
which renders as:
As mentioned on the tutorial (see this link).
If you want your link to include one of the terminating characters, e.g. ], all characters can be escaped with a backslash, e.g.:
Hello http://example.com/\]a\}b\\c\ d world.
which renders as:
Note that the http://example.com inside \a[http://example.com] only works because we do some post-processing magic that prevents its expansion, otherwise the link would expand twice:
\P[http://example.com]

\a[http://example.com]
which renders as:
This magic can be observed with --help-macros by seeing that the href argument of the a macro has the property:
"elide_link_only": true,
The following characters are the "insane link termination characters":
  • space
  • newline \n
  • open or close square bracket [ or ]
  • open or close curly braces { or }
Insane cross references and insane topic links with a single word terminate if any of these characters are found, see also: Section 4.2.1.4. "Insane link parsing rules".
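The termination rule can be sketched as follows (a simplified illustration that ignores backslash escapes, which the real parser handles):

```javascript
// Find where an insane link ends, scanning until one of the
// termination characters listed above is found.
// Simplified illustration: backslash escapes are not handled.
const TERMINATORS = new Set([' ', '\n', '[', ']', '{', '}']);

function insaneLinkEnd(text, start) {
  let i = start;
  while (i < text.length && !TERMINATORS.has(text[i])) i++;
  return i; // index one past the last character of the link
}

const s = 'Check out http://example.com] now';
const start = s.indexOf('http://');
console.log(s.slice(start, insaneLinkEnd(s, start))); // 'http://example.com'
```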
Some \b[bold] text.
which renders as:
Some bold text.
There is basically one application for this: poetry, which would be too ugly with a code block due to the fixed-width font:
Paragraph 1 Line 1\br
Paragraph 1 Line 2\br

Paragraph 2 Line 1\br
Paragraph 2 Line 2\br
which renders as:
Paragraph 1 Line 1
Paragraph 1 Line 2
Paragraph 2 Line 1
Paragraph 2 Line 2

4.2.4. Code block (``, `, \C, \c)

words: 738 articles: 3
Inline code (code that should appear in the middle of a paragraph rather than on its own line) is done with a single backtick (`) insane macro shortcut:
My inline `x = 'hello\n'` is awesome.
which renders as:
My inline x = 'hello\n' is awesome.
and block code (code that should appear on its own line) is done with two or more backticks (``):
``
f() {
  return 'hello\n';
}
``
which renders as:
f() {
  return 'hello\n';
}
The sane version of inline code is a lower case c:
My inline \c[[x = 'hello\n']] is awesome.
which renders as:
My inline x = 'hello\n' is awesome.
and the sane version of block code is with an upper case C:
\C[[
f() {
  return 'hello\n';
}
]]
which renders as:
f() {
  return 'hello\n';
}
The capital vs lower case theme is also used in other elements, see: block vs inline macros.
If the content of the sane code block has many characters that you would need to escape, you will often want to use literal arguments, which work just like they do for any other argument. For example:
\C[[[
A paragraph.

\C[[
And now, some long, long code, with lots
of chars that you would need to escape:
\ [  ] {  }
]]

A paragraph.
]]]
which renders as:
A paragraph.

\C[[
And now, some long, long code, with lots
of chars that you would need to escape:
\ [  ] {  }
]]

A paragraph.
Note that the initial newline is skipped automatically in code blocks, just as for any other element, due to: argument leading newline removal, so you don't have to worry about it.
The distinction between inline \c and block \C code blocks is needed because in HTML, pre cannot go inside P.
We could have chosen to do some magic to differentiate between them, e.g. checking if the block is the only element in a paragraph, but we decided not to do that to keep the language saner.
And now a code block outside of \OurBigBookExample to test how it looks directly under the \Toplevel implicit macro:
Hello

Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello
    HelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHello
Hello
Now with short description with math and underline:
Hello

Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello
    HelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHello
Hello
Code 1. My long code!
And now a very long inline code: Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello
4.2.4.1. \C argument
words: 88 articles: 2
See: Section 4.3.3.11.3. "description argument".
Example:
See the: <code Python hello world>.

``
print("Hello world")
``
{title=Python hello world}
{description=Note how this is super short unlike the C hello world!}
which renders as:
print("Hello world")
Code 2. Python hello world. Note how this is super short unlike the C hello world!
See: Section 4.3.3.11.2. "title argument".
Example:
See the: <code C hello world>.

``
#include <stdio.h>

int main(void) {
    puts("hello, world");
}
``
{title=C hello world}
which renders as:
#include <stdio.h>

int main(void) {
    puts("hello, world");
}
Code 3. C hello world.
The Comment and comment macros are regular macros that do not produce any output. Capitalization is explained at: Section 4.4.2. "Block vs inline macros".
You will therefore mostly want to use them with a literal argument, which will, as for any other macro, ignore any macros inside of it:
Before comment.

\Comment[[
Inside comment.
]]

After comment.
which renders as:
Before comment.
After comment.
And an inline one:
My inline \comment[[inside comment]] is awesome.

\comment[[inside comment]] inline at the start.
which renders as:
My inline is awesome.
inline at the start.
Insane with = (equal sign space):
= My h1

== My h2

=== My h3
Insane headers end at the first newline found. They cannot therefore contain raw newline tokens.
Equivalent sane:
\H[1][My h1]

\H[2][My h2]

\H[3][My h3]
Custom ID for cross references on insane headers:
= My h1
{id=h1}

== My h2
{id=h2}

=== My h3
{id=h3}
Sane equivalent:
\H[1][My h1]{id=h1}

\H[2][My h2]{id=h2}

\H[3][My h3]{id=h3}
4.2.6.1. Unlimited header levels
words: 139 articles: 11
There is no limit to how many levels we can have, for either sane or insane headers!
HTML is randomly limited to h6, so OurBigBook just renders higher levels as an h6 with a data-level attribute to indicate the actual level for possible CSS styling:
<h6 data-level="7">My title</h6>
The recommended style is to use insane headers up to h6, and then move to sane ones for higher levels though, as otherwise it becomes very hard to count the = signs.
To avoid this, we considered making the insane syntax be instead:
= 1 My h1
= 2 My h2
= 3 My h3
but it just didn't feel as good, and is a bit harder to type than just smashing = n times for lower levels, which is the most common use case. So we just copied Markdown.
4.2.6.1.1. My h3
articles: 10
4.2.6.1.1.1. My h4
articles: 9
The very first header of a document can be of any level, although we highly recommend that your document start with a \H[1], and contain exactly one \H[1], as this has implications such as:
After the initial header however, you must not skip a header level, e.g. the following would give an error because it skips level 3:
= my 1

== my 2

==== my 4
4.2.6.3. The toplevel header
words: 381 articles: 2
The toplevel header of an OurBigBook file is its first header and the one with the lowest level, e.g. in a document with the recommended syntax:
= Animal

== Dog

=== Bull Terrier

== Cat
the header = Animal is the toplevel header.
Being the toplevel header gives a header some special handling, described in the child sections of this section and elsewhere throughout this documentation.
The toplevel header is only defined if the document has only a single header of the lowest level, e.g. the following has only a single h2:
== My 2

=== My 3 1

=== My 3 2
Header numbers won't show for the toplevel header. For example, the headers would render like:
My 2

1. My 3 1

2. My 3 2
rather than:
1. My 2

1.1. My 3 1

1.2. My 3 2
This is because in this case, we guess that the h2 is the toplevel.
TODO: we kind of wanted this to be the ID of the toplevel header instead of the first header, but this would require an extra postprocessing pass (to determine if the first header is toplevel or not), which might affect performance, so we are not doing it right now.
When the OurBigBook input comes from a file (and not e.g. stdin), the default ID of the first header in the document is derived from the basename of the OurBigBook input source file rather than from its title.
This is specially relevant when including other files.
For example, in file named my-file.bigb which contains:
= Awesome ourbigbook file
the ID of the header is my-file rather than awesome-ourbigbook-file. See also: automatic ID from title.
If the file is an index file other than the toplevel index file, then the basename of the parent directory is used instead, e.g. the toplevel ID of a file:
my-subdir/README.bigb
would be:
#my-subdir
rather than:
#README.bigb
For the toplevel index file however, the ID is just taken from the header itself as usual. This is done because you often can't, in general, control the directory name of a project.
For example, a GitHub pages root directory must be named as <username>.github.io. And users may need to rename directories to avoid naming conflicts.
As a consequence of this, the toplevel index file cannot be included in other files.
4.2.6.4. \H arguments
words: 4k articles: 39
If given, makes the header capitalized by default on cross file references.
More details at: Section 4.2.20.2. "Cross reference title inflection".
This multiple argument marks given IDs as being children of the current page.
The effect is the same as adding the \x child argument to a cross reference under the header. Notably, such marked target IDs will show up in the tagged autogenerated header metadata section.
Example:
= Animal

== Mammal

=== Bat

=== Cat

== Wasp

== Flying animal
{child=bat}
{child=wasp}

\x[bat]

\x[wasp]
renders exactly as:
= Animal

== Mammal

=== Bat

=== Cat

== Wasp

== Flying animal

\x[bat]{child}

\x[wasp]{child}
The header child syntax is generally preferred because at some point while editing the content of the header, you might accidentally remove mentions to e.g. \x[bat]{child}, and then the relationship would be lost.
The \H tag argument does the same as the \x child argument but in the opposite direction.
4.2.6.4.3. \H file argument
words: 505 articles: 11
If given, the current section contains metadata about a file or other resource with the given URL.
If empty, the URL of the file is extracted directly from the header. Otherwise, the given URL is used.
For example:
= path/to/myfile.c
{file}

An explanation of what this file is about.
renders a bit like:
= path/to/myfile.c
{id=_file/path/to/myfile.c}

An explanation of what this file is about.

\a[path/to/myfile.c]

``
// Contents of path/to/myfile.c
int main() {
  return 1;
}
``
so note how:
  • automatic ID from title does not normalize the path, e.g. it does not convert / to -.
    Also, a _file/ prefix is automatically added to the ID. This is needed with -S, --split-headers to avoid a collision between:
    • path/to/myfile.c: the actual file
    • _file/path/to/myfile.c: the metadata about that file. Note that locally the .html extension is added as in _file/path/to/myfile.c.html, which avoids the collision. But on a server deployment, the .html is not present, and there would be a conflict if we didn't add that _file/ prefix.
  • a link to the file is added automatically, since users won't be able to click it from the header, as clicking on the header just links to the header itself
  • a preview is added. The type of preview is chosen as follows:
    • if the URL has an image extension, do an image preview
    • otherwise if the URL has a video extension, or is a YouTube URL, do a video preview
    • otherwise, don't show a preview, as we don't know anything sensible to show
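The preview selection above can be sketched as follows; the exact extension lists and the YouTube check are illustrative assumptions, not the library's actual tables:

```javascript
// Sketch of the preview type selection for {file} headers.
// The extension lists and YouTube detection are illustrative assumptions.
function filePreviewType(url) {
  if (/\.(png|jpe?g|gif|svg|webp)$/i.test(url)) return 'image';
  if (/\.(mp4|webm|ogv)$/i.test(url)) return 'video';
  if (/youtube\.com|youtu\.be/.test(url)) return 'video';
  return 'none'; // nothing sensible to show
}
```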
In some cases however, especially when dealing with external URLs, we might want a more human readable title together with a non-empty file argument:
The video \x[tank-man-by-cnn-1989] is very useful.

= Tank Man by CNN (1989)
{c}
{file=https://www.youtube.com/watch?v=YeFzeNAHEhU}

An explanation of what this video is about.
which renders something like:
The video \x[tank-man-by-cnn-1989] is very useful.

= Tank Man by CNN (1989)
{id=_file/https://www.youtube.com/watch?v=YeFzeNAHEhU}

\Video[https://www.youtube.com/watch?v=YeFzeNAHEhU]

An explanation of what this video is about.
To make internal cross references to {file} headers, use the \x file argument.
To create a separate file with the \H file argument set on the toplevel header, you must put it under the special _file input directory. For example:
_file/path/to/myfile.txt.bigb
could contain something like:
= myfile.txt
{file}

Description of my amazing file.
and it would be associated to the file:
path/to/myfile.txt
The content of the header = myfile.txt is arbitrary, as it can be fully inferred from the file path _file/path/to/myfile.txt.bigb. TODO: add linting for it. Perhaps we should make adding a header optional and auto-generate that header instead. But having at least an optional header is good as a way of being able to set header properties like tags.
See: Section 4.2.6.4.3.1. "\H file argument toplevel header".
This section contains some live demos of the \H file argument.
An explanation of what this directory is about.
Going deeper.
An explanation of what this text file is about.
Another line.
file_demo/hello_world.js
#!/usr/bin/env node
console.log('hello world')
Going deeper.
file_demo/file_demo_subdir/hello_world.js
#!/usr/bin/env node
console.log('hello world subdir')
Large text files are not previewed, as they would take up too much useless vertical space and disk memory/bandwidth.
index.js was not rendered because it is too large (> 2000 bytes)
Binary files are not rendered.
file_demo/my.bin was not rendered because it is a binary file (contains \x00) of unsupported type (e.g. not an image).
/Tank_man_standing_in_front_of_some_tanks.jpg
An explanation of what this image is about.
Another line.
An explanation of what this video is about.
This boolean argument determines whether renderings of a header will have section numbers or not. This affects all of:
This option can be set by default for all files with:
By default, headers are numbered as in a book, e.g.:
= h1

== h2

=== h3

==== h4
renders something like:
= h1

Table of contents
* 1. h2
  * 1.1. h3
    * 1.1.1. h4

== 1. h2

=== 1.1. h3

==== 1.1.1. h4
However, for documents with a very large number of sections or deeply nested headers, those numbers start to be more noise than anything else, especially in the table of contents, and you are better off just referring to IDs. E.g. imagine:
1.3.1.4.5.1345.3.2.1. Some deep level
When documents reach this type of scope, you can disable numbering with the numbered option.
This option can be set on any header, and it is inherited by all descendants.
The option only affects descendants.
E.g., if in the above example we turn numbering off at h2:
= h1

== h2
{numbered=0}

=== h3

==== h4
then it renders something like:
= h1

Table of contents
* 1. h2
  * h3
    * h4

== 1. h2

=== h3

==== h4
The more common usage pattern is to disable it at the toplevel and enable it only for specific "tutorial-like" sections. An example can be seen at:
which is something like:
= Huge toplevel wiki
{numbered=0}

== h2

=== A specific tutorial
{numbered}
{scope}

==== h4

===== h5
then it renders something like:
= Huge toplevel wiki

Table of contents
* h2
  * A specific tutorial
    * 1. h4
      * 1.1. h5

== h2

=== A specific tutorial

==== 1. h4

===== 1.1. h5
Note how in this case the number for h4 is just 1. rather than 1.1.1.. We only show numberings relative to the first non-numbered header, because the 1.1. wouldn't be very meaningful otherwise.
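The relative numbering described above can be sketched as follows, where each header in the ancestor chain records whether its parent had numbering enabled (a hypothetical data shape, not the library's internals):

```javascript
// Sketch: compute a header's rendered section number.
// chain: headers from just below the toplevel down to the target header.
// parentNumbered: the inherited `numbered` setting of the parent;
// pos: 1-based position among siblings. Hypothetical data shape.
function sectionNumber(chain) {
  let nums = [];
  for (const h of chain) {
    // numbering restarts below the nearest non-numbered ancestor
    nums = h.parentNumbered ? nums.concat(h.pos) : [];
  }
  return nums.join('.');
}
```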
In addition to the basic way of specifying header levels with an explicit level number as mentioned at Section 4.2.6. "Header (\H)", OurBigBook also supports a more indirect ID-based mechanism with the parent argument of the \H element.
We highly recommend using parent for all but the most trivial documents.
For example, the following fixed level syntax:
= My h1

== My h2 1

== My h2 2

=== My h3 2 1
is equivalent to the following ID-based version:
= My h1

= My h2 1
{parent=my-h1}

= My h2 2
{parent=my-h1}

= My h3 2 1
{parent=my-h2-2}
The main advantages of this syntax are felt when you have a huge document with very large header depths. In that case:
  • it becomes easy to get levels wrong with so many large level numbers to deal with. It is much harder to get an ID wrong.
  • when you want to move headers around to improve organization, things are quite painful without a refactoring tool (which we intend to provide in the browser editor with preview), as you need to fix up the levels of every single header.
    If you are using the ID-based syntax however, you only have to move the chunk of headers, and change the parent argument of a single top-level header being moved.
Note that when the parent= argument is given, the header level must be 1, otherwise OurBigBook assumes that something is weird and gives an error. E.g. the following gives an error:
= My h1

== My h2
{parent=my-h1}
because the second header has level 2 (== My h2) instead of the required level 1 (= My h2).
When scopes are involved, the rules are the same as those of internal reference resolution, including the leading / to break out of the scope in case of conflicts.
Like the \H child argument, parent also performs ID target from title on the argument, allowing you to use the original spaces and capitalization in the target as in:
= Flying animal

= Bat
{parent=Flying animal}
which is equivalent to:
= Flying animal

= Bat
{parent=flying-animal}
See also: Section 4.2.6.4.5.2. "Header explicit levels vs nesting design choice" for further rationale.
When mixing both \H parent argument and scopes, things get a bit complicated, because when writing or parsing, we have to first determine the parent header before resolving scopes.
As a result, the following simple rules are used:
  • start from the last header of the highest level
  • check if the {parent=XXX} is a suffix of its ID
  • if not, proceed to the next smaller level, and so on, until a suffix is found
Following those rules for example, a file tmp.bigb:
= h1
{scope}

= h1 1
{parent=h1}
{scope}

= h1 1 1
{parent=h1-1}

= h1 1 2
{parent=h1-1}

= h1 1 3
{parent=h1/h1-1}

= h1 2
{parent=h1}
{scope}

= h1 2 1
{parent=h1-2}
{scope}

= h1 2 1 1
{parent=h1-2/h1-2-1}
will lead to the following header tree with --log headers:
= h1  tmp
== h2 1 tmp/h1-1
=== h3 1.1 tmp/h1-1/h1-1-1
=== h3 1.2 tmp/h1-1/h1-1-2
=== h3 1.3 tmp/h1-1/h1-1-3
== h2 2 tmp/h1-2
=== h3 2.1 tmp/h1-2/h1-2-1
==== h4 2.1.1 tmp/h1-2/h1-2-1/h1-2-1-1
Arguably, the language would be even saner if we did:
\H[My h1][

Paragraph.

\H[My h2][]
]
rather than having explicit levels as in \H[1][My h1] and so on.
But we chose not to do it like most markups available, because it leads to too many nesting levels, and it becomes hard to determine where you are without tooling.
Ciro later "invented" (?) the \H parent argument, which he feels reaches the perfect balance between the advantages of those two options.
4.2.6.4.6. \H scope argument
words: 504 articles: 6
In some use cases, the sections under a section describe inseparable parts of something.
For example, when documenting an experiment you executed, you will generally want an "Introduction", then a "Materials" section, and then a "Results" section for every experiment.
On their own, those sections don't make much sense: they are always referred to in the context of the given experiment.
The problem is then how to get unique IDs for those sections.
One solution would be to manually add the experiment ID as a prefix to every subsection, as in:
= Experiments

See: \x[full-and-unique-experiment-name/materials]

== Introduction

== Full and unique experiment name

=== Introduction
{id=full-and-unique-experiment-name/introduction}

See our awesome results: \x[full-and-unique-experiment-name/results]

For a more general introduction to all experiments, see: \x[introduction].

=== Materials
{id=full-and-unique-experiment-name/materials}

=== Results
{id=full-and-unique-experiment-name/results}
but this would be very tedious.
To keep those IDs shorter, OurBigBook provides the scope boolean argument of headers, which works analogously to C++ namespaces on header IDs.
Using scope, the previous example could be written more succinctly as:
= Experiments

See: \x[full-and-unique-experiment-name/materials]

== Introduction

== Full and unique experiment name
{scope}

=== Introduction

See our awesome results: \x[results]

For a more general introduction to all experiments, see: \x[/introduction].

=== Materials

=== Results
Note how:
  • full IDs are automatically prefixed by the parent scopes, joined with a slash /
  • we can refer to other IDs within the current scope without duplicating the scope. E.g. \x[results] in the example already refers to the ID full-and-unique-experiment-name/results
  • to refer to an ID outside of the scope and avoid name conflicts with IDs inside of the current scope, we start a reference with a slash /
    So in the example above, \x[/introduction] refers to the ID introduction, and not full-and-unique-experiment-name/introduction.
When nested scopes are involved, cross reference resolution peels off the scopes one by one trying to find the closest match, e.g. the following works as expected:
= h1
{scope}

== h2
{scope}

=== h3
{scope}

\x[h2]
Here OurBigBook:
  • first tries to look for an h1/h2/h3/h2, since h1/h2/h3 is the current scope, but that ID does not exist
  • so it removes the h3 from the current scope, and looks for h1/h2/h2, which is still not found
  • then it removes the h2, leading to h1/h2, and that one is found, and therefore is taken
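The peeling procedure can be sketched as follows (a simplified model: it ignores the leading / escape out of scopes):

```javascript
// Sketch of scope peeling for cross reference resolution.
// Simplified: ignores the leading "/" escape out of scopes.
function resolveRef(target, currentScope, knownIds) {
  const parts = currentScope === '' ? [] : currentScope.split('/');
  // try the deepest scope first, then peel one component at a time
  for (let i = parts.length; i >= 0; i--) {
    const candidate = parts.slice(0, i).concat(target).join('/');
    if (knownIds.has(candidate)) return candidate;
  }
  return undefined; // error: ID not found
}
```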
Putting files in subdirectories of the build has the same effect as adding a scope to their top level header.
Notably, all headers inside that directory get the directory prepended to their IDs.
The toplevel directory is determined as described at: the toplevel index file.
4.2.6.4.6.3. Test scope 1
words: 10 articles: 2
For fun and profit.
4.2.6.4.6.3.1. Test scope 2
words: 6 articles: 1
Let's break this local link: ourbigbook.
When the toplevel header is given the scope property, OurBigBook automatically uses the file path for the scope and leaves fragments untouched.
For example, suppose that the file full-and-unique-experiment-name.bigb contains:
= Full and unique experiment name
{scope}

== Introduction

== Materials
In this case, multi-file output will generate a file called full-and-unique-experiment-name.html, and the URL of the subsections will be just:
  • full-and-unique-experiment-name.html#introduction
  • full-and-unique-experiment-name.html#materials
instead of
  • full-and-unique-experiment-name.html#full-and-unique-experiment-name/introduction
  • full-and-unique-experiment-name.html#full-and-unique-experiment-name/materials
Some quick interactive cross file link tests:
When using -S, --split-headers, cross references always point to non-split pages as mentioned at cross reference targets in split headers.
If the splitDefault boolean argument is given however:
  • the split header becomes the default, e.g. index.html is now the split one, and nosplit.html is the non-split one
  • the header it is given for, and all of its descendant headers, will use the split header as the default cross reference target, unless the target header is already rendered in the current page. This does not propagate across includes however.
For example, consider README.bigb:
= Toplevel
{splitDefault}

\x[h2][toplevel to h2]

\x[notreadme][toplevel to notreadme]

\Include[notreadme]

== h2
and notreadme.bigb:
= Notreadme

\x[h2][notreadme to h2]

\x[notreadme][notreadme to notreadme h2]

== Notreadme h2
Then the following links would be generated:
  • index.html: split version of README.bigb, i.e. does not contain h2
    • toplevel to h2: h2.html. Links to the split version of h2, since h2 is also affected by the splitDefault of its parent, and therefore links to it use the split version by default
    • toplevel to notreadme: notreadme.html. Links to non-split version of notreadme.html since that header is not splitDefault, because splitDefault does not propagate across includes
  • nosplit.html non-split version of README.bigb, i.e. contains h2
    • toplevel to h2: #h2, because even though h2 is splitDefault, that header is already present in the current page, so it would be pointless to reload the split one
    • toplevel to notreadme: notreadme.html
  • h2.html split version of h2 from README.bigb
  • notreadme.html: non-split version of notreadme.bigb
    • notreadme to h2: h2.html, because h2 is splitDefault
    • notreadme to notreadme h2: #notreadme-h2
  • notreadme-split.html: split version of notreadme.bigb
    • notreadme to h2: h2.html, because h2 is splitDefault
    • notreadme to notreadme h2: notreadme.html#notreadme-h2, because notreadme-h2 is not splitDefault
The major application of this is if you like to work with a huge README.bigb containing thousands of random small topics.
Splitting those into separate source files would be quite laborious, as it would require duplicating IDs on the filename, and setting up includes.
However, after this README reaches a certain size, page loads start becoming annoyingly slow, even despite already loading large assets like images and videos only on hover or click: the annoying slowness comes from loading the HTML itself before the browser can jump to the ID.
And even worse: this README corresponds to the main index page of the website, so that slowness is exactly what a large number of users will see.
Therefore, once this README reaches a certain size, you can add the splitDefault attribute to it, to make things smoother for readers.
And if you have a smaller, more self-contained, and highly valuable tutorial such as cirosantilli.com/x86-paging, you can just split that into a separate .bigb source file.
This way, any links into the smaller tutorial will show the entire page as generally desired.
And any links from the tutorial, back to the main massive README will link back to split versions, leading to fast loads.
This feature was implemented at: github.com/ourbigbook/ourbigbook/issues/131
Note that this huge README style is not recommended however. Ciro Santilli used to do it, but moved away from it. The currently recommended approach is to manually create not too large subtrees in each page. This way, readers can easily view several nearby sections without having to load a new page every time.
If given, adds a custom suffix to the output filename of the header when using -S, --split-headers.
If the given suffix is empty, it defaults to -split.
For example, given:
= my h1

== my h2
a --split-headers conversion would normally place my h2 into a file called:
my-h2.html
However, if we instead wrote:
== my h2
{splitSuffix}
it would instead be placed under:
my-h2-split.html
and if we set a custom one as:
== my h2
{splitSuffix=asdf}
it would go instead to:
my-h2-asdf.html
This option is useful if the root of your website is written in OurBigBook, and you want to both:
  • have a section that talks about some other project
  • host the documentation of that project inside the project source tree
For example, cirosantilli.com with source at github.com/cirosantilli/cirosantilli.github.io has a quick section about OurBigBook: cirosantilli.com#ourbigbook.
Therefore, without a custom suffix, the split header version of that header would go to docs.ourbigbook.com, which would collide with this documentation, that is present in a separate repository: github.com/ourbigbook/ourbigbook.
Therefore a splitSuffix property is used, making the split header version fall under /ourbigbook-split, and leaving the nicer /ourbigbook for the more important project toplevel.
If given on the toplevel header, which normally gets a suffix by default to differentiate it from the non-split version, it replaces the default -split suffix with the custom one.
For example if you had notindex.bigb as:
= Not index
then it would render to:
notindex-split.html
but if you used instead:
= Not index
{splitSuffix=asdf}
then it would instead be:
notindex-asdf.html
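The naming rules above could be sketched as follows (a hypothetical helper, not the library's actual implementation):

```javascript
// Sketch of the split header output file name rules.
// splitSuffix === undefined: argument absent; '': given but empty.
function splitOutputName(id, { isToplevel = false, splitSuffix } = {}) {
  let suffix = '';
  if (splitSuffix !== undefined) {
    suffix = '-' + (splitSuffix === '' ? 'split' : splitSuffix);
  } else if (isToplevel) {
    suffix = '-split'; // toplevel split pages get -split by default
  }
  return id + suffix + '.html';
}
```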
4.2.6.4.9. \H synonym argument
words: 671 articles: 4
This option is similar to \H title2 argument but it additionally:
  • creates a new ID that you can refer to, and renders it with the alternate chosen title
  • the rendered ID on cross references is the same as what it is a synonym for
  • the synonym header is not rendered at all, including in the table of contents
  • when using -S, --split-headers, a redirect output file is generated from the synonym to the main ID
Example:
= Parent

== GNU Debugger
{c}

= GDB
{c}
{synonym}

I like to say \x[gdb] because it is shorter than \x[gnu-debugger].
renders something like:
= GNU Debugger

I like to say \a[#gnu-debugger][GDB] because it is shorter than \x[#gnu-debugger][GNU Debugger].
Furthermore, if -S, --split-headers is used, another file is generated:
gdb.html
which contains a redirection from gdb.html to gnu-debugger.html.
Implemented at: github.com/ourbigbook/ourbigbook/issues/114
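One common way to implement such a redirect page in a static site is a meta refresh; a sketch of what the generated file could contain (the markup OurBigBook actually emits may differ):

```javascript
// Sketch of a synonym redirect page, e.g. gdb.html -> gnu-debugger.html.
// Meta refresh is one common static-site approach; the actual markup
// OurBigBook emits may differ.
function redirectHtml(targetHref) {
  return [
    '<!DOCTYPE html>',
    '<html>',
    `<head><meta http-equiv="refresh" content="0; url=${targetHref}"></head>`,
    `<body><a href="${targetHref}">${targetHref}</a></body>`,
    '</html>',
  ].join('\n');
}
```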
4.2.6.4.9.1. \H title argument
words: 417 articles: 1
Contains the main content of the header. The insane syntax:
= My title
is equivalent to the sane:
\H[1][My title]
and in both cases My title is the title argument.
The title argument is also notably used for automatic ID from title.
If a non-toplevel macro has the title argument present but no explicit id argument, an element ID is created automatically from the title by applying the following transformations:
  • do an id output format conversion on the title to remove, for example, any HTML tags that would be present in the conversion output
  • convert all characters to lowercase. This uses JavaScript case conversion. Note that this does convert non-ASCII characters to lowercase, e.g. É to é.
  • if id normalize latin is true (the default) do Latin normalization. This converts e.g. é to e.
  • if id normalize punctuation is true (the default) do Punctuation normalization. This converts e.g. + to plus.
  • convert consecutive sequences of all non a-z0-9 ASCII characters to a single hyphen -. Note that this leaves non-ASCII characters untouched.
  • strip leading or trailing hyphens
Note how those rules leave non-ASCII Unicode characters untouched, except for:
  • capitalization changes where applicable, e.g. É to é
as capitalization and determining if something "is a letter or not" in those cases can be tricky.
For toplevel headers, see: the ID of the first header is derived from the filename.
So for example, the following automatic IDs would be generated: Table 2. "Examples of automatically generated IDs".
Table 2. Examples of automatically generated IDs.
  • title: My favorite title → id: my-favorite-title
  • title: Ciro's markdown is awesome → id: ciro-s-markdown-is-awesome. The ' is an ASCII character, but it is not in a-z0-9, therefore it gets converted to a hyphen -.
  • title: É你 → id: e你 (Latin normalization: true). The Latin acute accented e, É, is converted to its lower case form é as per the JavaScript case conversion. Then, due to Latin normalization, é is converted to e. The Chinese character 你 is left untouched, as Chinese characters have no case and no ASCII analogue.
  • title: É你 → id: é你 (Latin normalization: false). Same as the previous, but é is not converted to e since Latin normalization is turned off.
  • title: C++ is great → id: c-plus-plus-is-great (Punctuation normalization: true). This is the effect of Punctuation normalization.
  • title: I love dogs. (with italic love) → id: i-love-dogs. love is extracted from the italic tags <i>love</i> with id output format conversion.
  • title: β Centauri → id: beta-centauri. Our Latin normalization is amazing and knows Greek!
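The rules and examples above can be sketched as follows. The normalization maps here are tiny illustrative stand-ins for the library's real tables, and the id output format conversion step (e.g. stripping <i> tags) is omitted:

```javascript
// Sketch of "automatic ID from title". The normalization maps are tiny
// stand-ins for the library's real tables; the "id output format
// conversion" step (e.g. stripping <i> tags) is omitted.
const PUNCT_MAP = { '+': ' plus ' };  // real table is much larger
const LATIN_MAP = { 'β': ' beta ' }; // real normalization knows much more Greek

function mapChars(s, map) {
  return [...s].map(c => map[c] !== undefined ? map[c] : c).join('');
}

function titleToId(title, { latin = true, punct = true } = {}) {
  let s = title.toLowerCase(); // JavaScript case conversion: É -> é
  if (punct) s = mapChars(s, PUNCT_MAP);
  if (latin) {
    s = mapChars(s, LATIN_MAP);
    // Latin normalization via Unicode decomposition: é -> e + accent -> e
    s = s.normalize('NFD').replace(/[\u0300-\u036f]/g, '');
  }
  // collapse runs of ASCII characters outside a-z0-9 into single hyphens,
  // leaving non-ASCII characters (e.g. 你) untouched
  s = s.replace(/[\x00-\x7F]+/g, run => run.replace(/[^a-z0-9]+/g, '-'));
  return s.replace(/^-+|-+$/g, ''); // strip leading/trailing hyphens
}
```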
For the toplevel header, its ID is derived from the basename of the OurBigBook file without extension instead of from the title argument.
TODO:
This conversion type is similar to Automatic ID from title, but it is used in certain cases where we are targeting IDs rather than setting them, notably:
Unlike \H title2 argument, the synonym does not show up by default next to the title. This is because we sometimes want that, and sometimes not. To make the title appear, you can simply add an empty title2 argument to the synonym header as in:
= GNU Debugger
{c}

= GDB
{c}
{synonym}
{title2}

= Quantum computing

= Quantum computer
{synonym}
which renders something like:
= GNU Debugger (GDB)

= Quantum computing
Note how we added the synonym to the title only when it is not just a simple flexion variant, since Quantum computing (Quantum computer) would be kind of useless.
Same as \x child argument but in the opposite direction, e.g.:
== Mammal

=== Bat
{tag=flying-animal}

=== Cat

== Flying animal
is equivalent in every way to:
== Mammal

=== Bat

=== Cat

== Flying animal
{child=bat}
Naming rationale:
  • parent as the opposite of child is already taken to mean the "main parent", via the \H parent argument
  • we could have renamed the \H child argument to tags as in "this header tags that one", but it would be a bit confusing tags vs tag
So child vs tag it is for now.
You generally want to use tag instead of the \H child argument because otherwise some very large header categories would contain huge lists of children, which is not very nice when editing.
It is possible to enforce the \H child argument or the \H tag argument in a given project with the lint h-tag option.
The title2 argument can be given to any element that has the title argument.
Its usage is a bit like the description= argument of images, allowing you to add some extra content to the header without affecting its ID.
Unlike description= however, title2 shows up on all full references, including appearances in the table of contents, which make it more searchable.
Its primary use cases are:
  • give acronyms, or other short names of fuller titles, such as mathematical/programming notation
    One primary reason to not use the acronyms as the main section name is to avoid possible ID ambiguities with other acronyms.
  • give the header in different languages
For example, given the OurBigBook input:
= Toplevel

The ToC follows:

== North Atlantic Treaty Organization
{c}
{title2=NATO}

\x[north-atlantic-treaty-organization]

\x[north-atlantic-treaty-organization]{full}
the rendered output looks like:
= Toplevel

The ToC follows:

* North Atlantic Treaty Organization (NATO)

== North Atlantic Treaty Organization (NATO)

North Atlantic Treaty Organization

Section 1. "North Atlantic Treaty Organization (NATO)"
Related alternatives to title2 include:
Parentheses are added automatically around all rendered title2 values.
The title2 argument has a special meaning when applied to a header with the \H synonym argument, see \H title2 argument of a synonym header.
When the \H toplevel argument is set, the header and its descendants will be automatically output to a separate file, even without -S, --split-headers.
For example given:
animal.bigb
= Animal

== Vertebrate

=== Dog
{toplevel}

==== Bulldog

== Invertebrate
and if you convert as:
ourbigbook animal.bigb
we get the following output files:
  • animal.html: contains the headers: "Animal", "Vertebrate" and "Invertebrate", but not "Dog" and "Bulldog"
  • dog.html: contains only the headers: "Dog" and "Bulldog"
This option is intended to produce output identical to using includes and separate files, i.e. the above is equivalent to:
animal.bigb
= Animal

== Vertebrate

\Include[dog]

== Invertebrate
dog.bigb
= Dog
{toplevel}

== Bulldog
Or in other words: the toplevel header of each source file gets {toplevel} set implicitly for it by default.
This design choice might change some day. Arguably, the most awesome setup is one in which source files and outputs are completely decoupled. OurBigBook Web also essentially wants this, as ideally we want to store one source per header there in each DB entry. We shall see.
4.2.6.4.13. \H wiki argument
words: 176 articles: 3
If given, show a link to the Wikipedia article that corresponds to the header.
If a value is not given, automatically link to the Wikipedia page whose title matches the header exactly, with spaces converted to underscores.
Here is an example with an explicit wiki argument:
==== Tiananmen Square
{wiki=Tiananmen_Square}
which looks like:
or equivalently with the value deduced from the title:
= Tiananmen Square
{wiki}
which looks like:
You can only link to subsections of wiki pages with explicit links as in:
= History of Tiananmen Square
{{wiki=Tiananmen_Square#History}}
which looks like:
Note that in this case, you either need a literal argument {{}} or to explicitly escape the # character as in:
= History of Tiananmen Square
{wiki=Tiananmen_Square\#History}
to avoid the creation of an insane topic link with a single word.
Also note that Wikipedia subsections are not completely stable, so generally you would rather want to link to a permalink with a full URL as in:
= Artificial general intelligence
{wiki=https://en.wikipedia.org/w/index.php?title=Artificial_general_intelligence&oldid=1192191193#Tests_for_human-level_AGI}
Note that in this case escaping the # is not necessary because it is part of the insane link that starts at https://.
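The link target derivation described in this section could be sketched as follows; the base URL and the exact precedence rules are assumptions for illustration:

```javascript
// Sketch of the {wiki} link target derivation. The base URL and the
// handling details are assumptions for illustration.
function wikiHref(title, wikiArg) {
  if (wikiArg !== undefined && /^https?:\/\//.test(wikiArg)) {
    return wikiArg; // a full permalink URL was given
  }
  const page = (wikiArg === undefined || wikiArg === '')
    ? title.replace(/ /g, '_') // deduce from title: spaces -> underscores
    : wikiArg;
  return 'https://en.wikipedia.org/wiki/' + page;
}
```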
4.2.6.5. Header metadata section
words: 213 articles: 3
OurBigBook adds some header metadata to the toplevel header at the bottom of each page. This section describes this metadata.
Although the table of contents has a macro to specify its placement, it is also automatically placed at the bottom of the page, and could be considered a header metadata section.
Lists other sections that link to the current section.
E.g. in:
= tmp

== tmp 1

=== tmp 1 1

=== tmp 1 2

\x[tmp-1]

== tmp 2

\x[tmp-1]
the page tmp-1.html would contain a list of incoming links as:
  • tmp-1-2
  • tmp-2
since those pages link to the tmp-1 ID.
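Building the incoming links index is essentially inverting the extracted cross references; a sketch over hypothetical (sourceHeaderId, targetId) pairs:

```javascript
// Sketch: building the incoming links index from extracted
// (sourceHeaderId, targetId) cross reference pairs.
function incomingLinks(refs) {
  const index = new Map();
  for (const [from, to] of refs) {
    if (!index.has(to)) index.set(to, []);
    index.get(to).push(from);
  }
  return index;
}
```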
Lists sections that are secondary children of the current section, i.e. tagged under the current section.
The main header tree hierarchy descendants already show under the table of contents instead.
E.g. in:
= tmp

== Mammal

== Flying

== Animal

=== Bat
{tag=mammal}
{tag=flying}

=== Bee
{tag=flying}

=== Dog
{tag=mammal}
the tagged sections for:
  • Mammal will contain Bat and Dog
  • Flying will contain Bat and Bee
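The tagged sections index is the inverse mapping of the {tag=} arguments; a sketch over a hypothetical data shape:

```javascript
// Sketch: building the "tagged" sections index from {tag=} arguments.
// headers: [{ id, tags: [...] }] -- hypothetical data shape.
function taggedIndex(headers) {
  const index = new Map();
  for (const h of headers) {
    for (const tag of h.tags) {
      if (!index.has(tag)) index.set(tag, []);
      index.get(tag).push(h.id);
    }
  }
  return index;
}
```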
Shows a list of ancestors of the page. E.g. in:
= Asia

== China

=== Beijing

==== Tiananmen Square

=== Hong Kong
the ancestor lists would be for:
  • Hong Kong: China, Asia
  • Tiananmen Square: Beijing, China, Asia
  • Beijing: China, Asia
  • China: Asia
so we see that this basically provides a type of breadcrumb navigation.
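The ancestors list is simply a walk up the header tree; a sketch over a hypothetical child-to-parent map:

```javascript
// Sketch: the ancestors list is a walk up the header tree.
// parentOf: child ID -> parent ID map (hypothetical data shape).
function ancestors(id, parentOf) {
  const out = [];
  for (let p = parentOf[id]; p !== undefined; p = parentOf[p]) {
    out.push(p);
  }
  return out; // nearest ancestor first, toplevel last
}
```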

4.2.7. Image (\Image and \image)

words: 2k articles: 19
A block image uses capital 'i' Image. The following example showcases most of the image properties, producing Figure 4. "The title of my image":
Have a look at this amazing image: \x[image-my-test-image].

\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=The title of my image}
{id=image-my-test-image}
{width=600}
{height=200}
{source=https://en.wikipedia.org/wiki/File:Tianasquare.jpg}
{description=The description of my image.}
which renders as:
Have a look at this amazing image: Figure 4. "The title of my image".
Figure 4. The title of my image. Source. The description of my image.
This exemplifies the following parameters:
  • title: analogous to the \H title argument. Shows up prominently, and sets a default ID if one is not given. It is recommended that you don't add a period . to it, as that would show up in cross references
  • image description argument
  • source: a standardized way to credit an image by linking to a URL that contains further image metadata
For further discussion on the effects of ID see: Section 4.2.7.1. "Image ID".
And this is how you make an inline image, with lower case i:
My inline \image[Tank_man_standing_in_front_of_some_tanks.jpg][test image] is awesome.
which renders as:
My inline test image is awesome.
Inline images can't have captions.
And now for an image outside of \OurBigBookExample to test how it looks directly under the \Toplevel implicit macro: Figure 5.
Figure 5
4.2.7.1. Image ID
words: 384 articles: 1
Here is an image without a description but with an ID so we can link to it: Figure 6.
Have a look at this amazing image: \x[image-my-test-image-2].

\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{id=image-my-test-image-2}
which renders as:
Have a look at this amazing image: Figure 6.
Figure 6
This works because full is the default cross reference style for Image; otherwise the link text would be empty since there is no title, and OurBigBook would raise an error.
OurBigBook can optionally deduce the title from the basename of the src argument if the titleFromSrc boolean argument is given, or if title-from-src is set on the default media provider for the media type:
Have a look at this amazing image: \x[image-tank-man-standing-in-front-of-some-tanks].

\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{titleFromSrc}
which renders as:
Have a look at this amazing image: Figure 7. "Tank man standing in front of some tanks.".
Figure 7. Tank man standing in front of some tanks.
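The title deduction described above can be sketched as: take the basename, strip the extension, and replace underscores with spaces. This is a simplified illustration, not the actual implementation:

```python
import os

def title_from_src(src):
    """Deduce a human-readable title from an image src.

    Illustrative sketch of the titleFromSrc behavior described above;
    the real implementation may handle more cases.
    """
    base = os.path.basename(src)
    name, _ext = os.path.splitext(base)
    return name.replace('_', ' ')
```

So `title_from_src('Tank_man_standing_in_front_of_some_tanks.jpg')` gives `'Tank man standing in front of some tanks'`, matching Figure 7 above.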
If the image has neither ID nor title nor description nor source, then it does not get a caption at all:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
which renders as:
Tank_man_standing_in_front_of_some_tanks.jpg
If the image has neither an ID nor a title, then it gets an automatically generated ID, just like every other element of the OurBigBook HTML output, and it is possible for readers to link to that ID in the rendered version, e.g. as:
#_123
Note that the 123 is not tied to the Figure <number>; it is just a sequential ID that runs over all elements.
This type of ID is of course not stable across document revisions, since if an image is added before that one, the link will break. So give an ID or title to anything that you expect readers to link to.
Also, it is not possible to link to such images with a cross reference, as with any other OurBigBook element with an autogenerated temporary ID.
Another issue to consider is that in paged output formats like PDF, the image could float away from the text that refers to it, so you basically always want to refer to images by ID, and not just by saying "the following image".
We can also see that such an image does not increment the Figure count:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]{id=image-my-test-image-count-before}
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]{id=image-my-test-image-count-after}
which renders as:
If the image has any visible metadata such as source or description however, then the caption does show and the Figure count gets incremented:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]{source=https://en.wikipedia.org/wiki/File:Tianasquare.jpg}
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]{description=This is the description of my image.}
which renders as:
Figure 10. Source.
Figure 11. This is the description of my image.
4.2.7.2. Where to store images
words: 1k articles: 4
If you are making a limited repository that will not have a ton of images, then you can get away with simply tracking your images with Git in the main repository.
With this setup, no further action is needed. For example, with a file structure of:
./README.bigb
./Tank_man_standing_in_front_of_some_tanks.jpg
just use the image from README.bigb as:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
which renders as:
Tank_man_standing_in_front_of_some_tanks.jpg
However, if you are writing a huge tutorial, which can have an unbounded number of images (i.e. any scientific book), then you likely don't want to track your images in the main Git repository.
A generally better alternative is to store images in a separate media repository, ideally tracking it as a Git submodule.
In this approach, you create, in addition to the main repository containing the text, a separate GitHub repository that contains only media such as images.
This approach is more suitable than storing images inside the main repository itself if you are going to have a lot of images.
When using this approach, you could of course just point directly to the final image URL, e.g. as in:
\Image[https://raw.githubusercontent.com/ourbigbook/ourbigbook-media/master/Fundamental_theorem_of_calculus_topic_page_arrow_to_full_article.png]
which renders as:
https://raw.githubusercontent.com/ourbigbook/ourbigbook-media/master/Fundamental_theorem_of_calculus_topic_page_arrow_to_full_article.png
but OurBigBook offers configurations that let you enter just the image basename: Fundamental_theorem_of_calculus_topic_page_arrow_to_full_article.png, which we will cover next.
In order to get this to work, the recommended repository setup is:
The directory and repository names are not mandatory, but if you place media in data/media and name its repository by adding the *-media suffix, then ourbigbook will handle everything for you without any further configuration in media-providers.
This particular documentation repository does have a different setup, as can be seen from its ourbigbook.json. Then, when everything is set up correctly, we can refer to images simply as:
\Image[Fundamental_theorem_of_calculus_topic_page_arrow_to_full_article.png]{provider=github}
which renders as:
https://raw.githubusercontent.com/ourbigbook/ourbigbook-media/master/Fundamental_theorem_of_calculus_topic_page_arrow_to_full_article.png
In this example, we also needed to set {provider=github} explicitly since it was not set as the default image provider in our ourbigbook.json. In most projects however, all of your images will be in the default repository, so this won't be needed.
provider must not be given when a full URL is given because we automatically detect providers from URLs, e.g.:
\Image[https://raw.githubusercontent.com/ourbigbook/ourbigbook-media/master/Fundamental_theorem_of_calculus_topic_page.png]{provider=github}
is an error.
TODO implement: ourbigbook will even automatically add and push used images in the my-tutorial-media repository for you during publishing!
You should then use the following rules inside my-tutorial-media:
  • give every file a very descriptive and unique name as a full English sentence
  • never ever delete any files, nor change their content, unless it is an improvement in format that does not change the information contained in the image TODO link to nice Wikimedia Commons guideline page
This way, even though the repositories are not fully in sync, anyone who clones the latest version of the *-media directory will be able to view any version of the main repository.
Then, if one day the media repository ever exceeds GitHub's size limit, you can just migrate the images to another image server that allows arbitrary basenames, e.g. AWS, and configure your project to use the new media base URL with the media-providers option.
The reason why images should be kept in a separate repository is that images are hundreds or thousands of times larger than hand written text.
Therefore, images could easily fill up the maximum repository size you are allowed: webapps.stackexchange.com/questions/45254/file-size-and-storage-limits-on-github#84746 and then what will you do when GitHub comes asking you to reduce the repository size?
Git LFS is one approach to deal with this, but we feel that it adds too much development overhead.
This is likely the sanest approach possible, as it clearly specifies which media version matches which repository version through the submodule link.
Furthermore, it is possible to make the submodule clone completely optional by setting things up as follows. For your OurBigBook project yourname/myproject create a yourname/myproject-media with the media, and track it as a submodule under yourname/myproject/media.
Then, add to media-providers:
"media-providers": {
  "github": {
    "default-for": ["image", "video"],
    "path": "media",
    "remote": "yourname/myproject-media"
  }
}
Now, as mentioned at media-providers, everything will work beautifully:
Wikimedia Commons is another great possibility to upload your images to:
\Image[https://upload.wikimedia.org/wikipedia/commons/thumb/5/5b/Gel_electrophoresis_insert_comb.jpg/450px-Gel_electrophoresis_insert_comb.jpg]
{source=https://commons.wikimedia.org/wiki/File:Gel_electrophoresis_insert_comb.jpg}
which renders as:
Figure 12. Source.
OurBigBook likes Wikimedia Commons so much that we automatically parse the image URL, and if it is from Wikimedia Commons, deduce the source for you. So the above image renders the same without the source argument:
\Image[https://upload.wikimedia.org/wikipedia/commons/5/5b/Gel_electrophoresis_insert_comb.jpg]
which renders as:
Figure 13. Source.
And as for non-Wikimedia images, you can automatically generate a title from the src by setting the titleFromSrc boolean argument, or by setting title-from-src on the default media provider for the media type:
\Image[https://upload.wikimedia.org/wikipedia/commons/5/5b/Gel_electrophoresis_insert_comb.jpg]
{titleFromSrc}
which renders as:
Figure 14. Gel electrophoresis insert comb. Source.
And a quick test for a more complex thumb resized URL:
\Image[https://upload.wikimedia.org/wikipedia/commons/thumb/5/5b/Gel_electrophoresis_insert_comb.jpg/450px-Gel_electrophoresis_insert_comb.jpg]
which renders as:
Figure 15. Source.
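The source deduction described above can be sketched as follows: strip the upload.wikimedia.org prefix, recover the original filename (accounting for thumb-resized URLs, where the original filename is a directory component), and point to the corresponding File: page. This is a simplified illustration of the behavior, not the actual implementation:

```python
def commons_source(url):
    """Deduce the Wikimedia Commons file page from an upload URL.

    Simplified sketch of the automatic source deduction described
    above; the real implementation may handle more URL shapes.
    """
    prefix = 'https://upload.wikimedia.org/wikipedia/commons/'
    if not url.startswith(prefix):
        return None
    parts = url[len(prefix):].split('/')
    if parts[0] == 'thumb':
        # thumb/<h1>/<h2>/<File.jpg>/<450px-File.jpg>: the original
        # filename is the second-to-last path component.
        filename = parts[-2]
    else:
        filename = parts[-1]
    return 'https://commons.wikimedia.org/wiki/File:' + filename
```

Both the plain and the thumb-resized Gel_electrophoresis_insert_comb.jpg URLs above map to the same File: page.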
If you really absolutely want to turn off the source, you can explicitly set:
\Image[https://upload.wikimedia.org/wikipedia/commons/5/5b/Gel_electrophoresis_insert_comb.jpg]
{source=}
which renders as:
https://upload.wikimedia.org/wikipedia/commons/5/5b/Gel_electrophoresis_insert_comb.jpg
but you don't want to do that for CC BY+, the license most commonly used on Wikimedia Commons, do you? :-)
Upsides of using Wikimedia Commons for your images:
  • makes it easier for other writers to find and reuse your images
  • automatically generates resized versions of the uploaded images into several common dimensions so you can pick the smallest one that fits your desired image height to reduce bandwidth usage
  • if you have so many images that they would blow even the size of a separate media repository, this will still work
Downsides:
  • forces you to use the Creative Commons license
  • requires the content to be educational in nature
  • uploading a bunch of images to Wikimedia Commons does feel a bit more laborious than it should because you have to write down so much repeated metadata for them
We do this by default because OurBigBook is meant to allow producing huge single page documents, as Ciro likes, and in this way:
  • images that the user is looking at will load first
  • we save a lot of bandwidth for the user who only wants to browse one section
TODO: maybe create a mechanism to disable this for the entire build with ourbigbook.json.
For the love of God, is there no standardized way for SVG to set its background color without adding a rectangle? stackoverflow.com/questions/11293026/default-background-color-of-svg-root-element viewport-fill was just left in limbo?
And as a result, many, many SVG images online that you might want to reuse just rely on white pages and don't add that background rectangle.
Therefore, for now we just force a white background in our default CSS, which is what most SVGs will work with. Otherwise, you could lose the entire image to our default black background.
Then if someone ever has an SVG that needs another background color, we can add an image attribute to set that color as a local style.
TODO implement: mechanism where you enter a textual description of the image inside the code body, and it then converts to an image, adds to the -media repo and pushes all automatically. Start with dot.
github.com/ourbigbook/ourbigbook/issues/40
4.2.7.6. Image argument
words: 616 articles: 8
Adds a border around the image. This can be useful to make it clearer where images start and end when the image background color is the same as the background color of the OurBigBook document.
\Image[logo.svg]
{border}
{height=150}
{title=Logo of the OurBigBook Project with a border around it.}
which renders as:
Figure 16. Logo of the OurBigBook Project with a border around it.
The description argument is similar to the image title argument, but allows longer explanations without them appearing in cross references to the image.
For example, consider:
See this image: \x[image-description-argument-test-1].

\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=Tank man standing in front of some tanks}
{id=image-description-argument-test-1}
{description=Note how the tanks are green.}
{source=https://en.wikipedia.org/wiki/File:Tianasquare.jpg}
which renders as:
Figure 17. Tank man standing in front of some tanks. Source. Note how the tanks are green.
In this example, the reference \x[image-description-argument-test-1] expands just to
Tank man standing in front of some tanks
and does not include the description, which only shows on the image.
The description can be as large as you like. If it gets really large however, you might want to consider moving the image to its own header to keep things slightly saner. This will be especially true after we eventually do: github.com/ourbigbook/ourbigbook/issues/180.
If the description contains any element that would take its own separate line, like multiple paragraphs or a list, we automatically add a line grouping the description with the corresponding image to make that clearer; otherwise it can be hard to know which description corresponds to a far away image. Example with multiple paragraphs:
Stuff before the image.

\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=Tank man standing in front of some tanks}
{id=image-description-argument-test-2}
{source=https://en.wikipedia.org/wiki/File:Tianasquare.jpg}
{description=Note how the tanks are green.

But the shirt is white.}

Stuff after the image description.
which renders as:
Stuff before the image.
Figure 18. Tank man standing in front of some tanks. Source.
Note how the tanks are green.
But the shirt is white.
Stuff after the image description.
We recommend adding a period or other punctuation to the end of every description.
Analogous to the \a external argument when checking if the image src argument exists or not.
By default, we fix image heights to height=315, and let the width be calculated proportionally once the image loads. We therefore ignore the actual image size. This is done to:
  • prevent reflows as the page loads images and can determine their actual sizes, especially if the user opens the page at a given ID in the middle of the page
  • create a more uniform media experience by default, unless a custom image size is actually needed e.g. if the image needs to be larger
When the viewport is narrow enough, mobile CSS takes over and forces images to fill 100% of the page width instead.
\Image[logo.svg]
{height=150}
which renders as:
logo.svg
\Image[logo.svg]
{height=550}
which renders as:
logo.svg
Here's a very long test image:
Figure 19. Very long test image. Source. And some tall inline maths: .
If given, clicking the image goes to the specified URL rather than the image's own URL, which is the default.
By default, clicking on a rendered image links to the URL of the image itself. E.g. clicking:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
which renders as:
Tank_man_standing_in_front_of_some_tanks.jpg
would open Tank_man_standing_in_front_of_some_tanks.jpg, as it produces an img surrounded by something like a href="Tank_man_standing_in_front_of_some_tanks.jpg".
If instead we want the image to point to a custom URL, e.g. ourbigbook.com, we could write:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]{link=https://ourbigbook.com}
which renders as:
Tank_man_standing_in_front_of_some_tanks.jpg
and now clicking the image leads to ourbigbook.com instead.
Where the image was taken from, e.g.:
\Image[https://upload.wikimedia.org/wikipedia/commons/6/68/Akha_cropped_hires.JPG]
{title=A couple}
{source=https://en.wikipedia.org/wiki/Human}
which renders as:
Figure 20. A couple. Source.
The source is automatically inferred for certain known websites, e.g.:
The address of the image, e.g. in:
\Image[image.png]
the src is image.png.
Analogous to the \a href argument.
Analogous to the \H title argument.

4.2.8. Include (\Include)

words: 612 articles: 25
The \Include macro allows including the headers of an external OurBigBook file under the current header.
It exists to allow optional single page HTML output while still retaining the ability to:
  • split up large input files into multiple files to make renders faster during document development
  • suggest an optional custom output split with one HTML output per OurBigBook input, in order to avoid extremely large HTML pages which could be slow to load
\Include takes one mandatory argument: the ID of the section to be included, much like cross references.
There is however one restriction: only toplevel headers can be pointed to. This restriction allows us to easily find the included file in the filesystem, and dispenses with the need to do a first ./ourbigbook run to generate the ID database. This works because the ID of the first header is derived from the filename.
Headers of the included document are automatically shifted to match the level of the child of the level where they are being included.
If --embed-includes is given, the external document is rendered embedded into the current document directly, essentially as if the source had been copy pasted (except for small corrections such as the header offsets).
Otherwise, the following effects happen:
  • The headers of the included tree appear in the table of contents of the document as links to the corresponding external files.
    This is implemented simply by reading a previously generated database file much like cross file reference internals, which avoids the slowdown of parsing all included files every time.
    You do, however, have to do an initial parse of all files in the project to extract their headers, just as you would when linking to those headers.
  • the include itself renders as a link to the included document
Here is an example of inclusion of the files not-readme.bigb and not-readme-2.bigb:
\Include[not-readme]
\Include[not-readme-2]
\Include[not-readme-with-scope]
The above is the recommended and slightly insaner version of:
\Include[not-readme]

\Include[not-readme-2]

\Include[not-readme-with-scope]
The insaner version is a bit insaner because \Include magically discards the newline node that follows it if it is just a plaintext node containing exactly one newline. With a double newline, the newline would already have been removed at the lexing stage as part of a paragraph.
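The newline-discarding rule can be sketched as a post-processing pass over a node list. The tuple-based node representation here is purely an assumption for illustration; the real OurBigBook AST differs:

```python
def discard_newline_after_include(nodes):
    """Drop a plaintext node containing exactly one newline when it
    immediately follows an Include node.

    Nodes are modeled as (type, value) tuples for illustration only;
    this is not OurBigBook's actual AST representation.
    """
    out = []
    for node in nodes:
        if node == ('plaintext', '\n') and out and out[-1][0] == 'include':
            continue  # the magic discard described above
        out.append(node)
    return out
```

With this pass, consecutive single-newline-separated \Include lines collapse to the same node sequence as the double-newline version.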
Section 4.2.8.3. "\Include example" shows what those actually render like.
When you are in a subdirectory, include resolution is simply relative to that subdirectory. E.g. we could do:
subdir/index.bigb
= Subdir

\Include[notindex]
\Include[subdir2/notindex]
subdir/notindex.bigb
= Notindex
subdir/subdir2/notindex.bigb
= Notindex
It is not currently possible to include from ancestor directories: github.com/ourbigbook/ourbigbook/issues/214.
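The include resolution rule described above can be sketched as: append .bigb to the target ID and resolve relative to the directory of the including file. This is an illustrative simplification, not the actual implementation:

```python
import os

def include_target(current_file, target_id):
    """Resolve the file containing the toplevel header with the given
    ID, relative to the including file's directory.

    Illustrative sketch of the rule described above: the toplevel
    header ID is derived from the filename, so no ID database lookup
    is needed.
    """
    return os.path.join(os.path.dirname(current_file), target_id + '.bigb')
```

So from subdir/index.bigb, \Include[notindex] resolves to subdir/notindex.bigb, and \Include[subdir2/notindex] to subdir/subdir2/notindex.bigb, as in the example above.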
This option is analogous to the \H parent argument, but for includes.
For example, suppose you have:
= Animal

== Dog

== Cat

== Bat
and now you want to split Cat to cat.bigb.
If you wrote:
= Animal

== Dog

\Include[cat]

== Bat
Cat would be a child of Dog, since that is the previous header, which is not what we want.
Instead, we want to write:
= Animal

== Dog

\Include[cat]{parent=animal}

== Bat
and now Cat will be a child of Animal as desired.
Implemented at: github.com/ourbigbook/ourbigbook/issues/127
4.2.8.3. \Include example
words: 122 articles: 22
This shows what includes render as.
4.2.8.3.1. Not the README
words: 23 articles: 10
This section is present in another page, follow this link to view it.
This section is present in another page, follow this link to view it.
This section is present in another page, follow this link to view it.
This section is present in another page, follow this link to view it.
This section is present in another page, follow this link to view it.
Some \i[italic] text.
which renders as:
Some italic text.
The JsCanvasDemo macro allows you to create interactive HTML/JavaScript canvas demos easily.
These demos:
  • only start running when the user scrolls over them for the first time
  • stop automatically when they leave the viewport
so you can stuff as many of them as you want on a page, and they won't cause the reader's CPU to fry an egg.
\JsCanvasDemo[[
new class extends OurbigbookCanvasDemo {
  init() {
    super.init('hello');
    this.pixel_size_input = this.addInputAfterEnable(
      'Pixel size',
      {
        'min': 1,
        'type': 'number',
        'value': 1,
      }
    );
  }
  draw() {
    var pixel_size = parseInt(this.pixel_size_input.value);
    for (var x = 0; x < this.width; x += pixel_size) {
      for (var y = 0; y < this.height; y += pixel_size) {
        var b = ((1.0 + Math.sin(this.time * Math.PI / 16)) / 2.0);
        this.ctx.fillStyle =
          'rgba(' +
          (x / this.width) * 255 + ',' +
          (y / this.height) * 255 + ',' +
          b * 255 +
          ',255)'
        ;
        this.ctx.fillRect(x, y, pixel_size, pixel_size);
      }
    }
  }
}
]]
which renders as:
new class extends OurbigbookCanvasDemo {
  init() {
    super.init('hello');
    this.pixel_size_input = this.addInputAfterEnable(
      'Pixel size',
      {
        'min': 1,
        'type': 'number',
        'value': 1,
      }
    );
  }
  draw() {
    var pixel_size = parseInt(this.pixel_size_input.value);
    for (var x = 0; x < this.width; x += pixel_size) {
      for (var y = 0; y < this.height; y += pixel_size) {
        var b = ((1.0 + Math.sin(this.time * Math.PI / 16)) / 2.0);
        this.ctx.fillStyle =
          'rgba(' +
          (x / this.width) * 255 + ',' +
          (y / this.height) * 255 + ',' +
          b * 255 +
          ',255)'
        ;
        this.ctx.fillRect(x, y, pixel_size, pixel_size);
      }
    }
  }
}
And another one showing off some WebGL:
new class extends OurbigbookCanvasDemo {
  init() {
    super.init('webgl', {context_type: 'webgl'});
    this.ctx.viewport(0, 0, this.ctx.drawingBufferWidth, this.ctx.drawingBufferHeight);
    this.ctx.clearColor(0.0, 0.0, 0.0, 1.0);
    this.vertexShaderSource = `
#version 100
precision highp float;
attribute float position;
void main() {
  gl_Position = vec4(position, 0.0, 0.0, 1.0);
  gl_PointSize = 64.0;
}
`;

    this.fragmentShaderSource = `
#version 100
precision mediump float;
void main() {
  gl_FragColor = vec4(0.18, 0.0, 0.34, 1.0);
}
`;
    this.vertexShader = this.ctx.createShader(this.ctx.VERTEX_SHADER);
    this.ctx.shaderSource(this.vertexShader, this.vertexShaderSource);
    this.ctx.compileShader(this.vertexShader);
    this.fragmentShader = this.ctx.createShader(this.ctx.FRAGMENT_SHADER);
    this.ctx.shaderSource(this.fragmentShader, this.fragmentShaderSource);
    this.ctx.compileShader(this.fragmentShader);
    this.program = this.ctx.createProgram();
    this.ctx.attachShader(this.program, this.vertexShader);
    this.ctx.attachShader(this.program, this.fragmentShader);
    this.ctx.linkProgram(this.program);
    this.ctx.detachShader(this.program, this.vertexShader);
    this.ctx.detachShader(this.program, this.fragmentShader);
    this.ctx.deleteShader(this.vertexShader);
    this.ctx.deleteShader(this.fragmentShader);
    if (!this.ctx.getProgramParameter(this.program, this.ctx.LINK_STATUS)) {
      console.log('error ' + this.ctx.getProgramInfoLog(this.program));
      return;
    }
    this.ctx.enableVertexAttribArray(0);
    var buffer = this.ctx.createBuffer();
    this.ctx.bindBuffer(this.ctx.ARRAY_BUFFER, buffer);
    this.ctx.vertexAttribPointer(0, 1, this.ctx.FLOAT, false, 0, 0);
    this.ctx.useProgram(this.program);
  }
  draw() {
    this.ctx.clear(this.ctx.COLOR_BUFFER_BIT);
    this.ctx.bufferData(this.ctx.ARRAY_BUFFER, new Float32Array([Math.sin(this.time / 60.0)]), this.ctx.STATIC_DRAW);
    this.ctx.drawArrays(this.ctx.POINTS, 0, 1);
  }
}
Insane with * (asterisk space):
* a
* b
* c
which renders as:
  • a
  • b
  • c
Equivalent saner with implicit ul container:
\L[a]
\L[b]
\L[c]
which renders as:
  • a
  • b
  • c
Equivalent fully sane with explicit container:
\Ul[
\L[a]
\L[b]
\L[c]
]
which renders as:
  • a
  • b
  • c
The explicit container is required if you want to pass extra argument properties to the ul list macro, e.g. a title and an ID:
\Ul
{id=list-my-id}
[
\L[a]
\L[b]
\L[c]
]
which renders as:
  • a
  • b
  • c
This is the case because without the explicit container in an implicit ul list, the arguments would stick to the last list item instead of the list itself.
It is also required if you want ordered lists:
\Ol[
\L[first]
\L[second]
\L[third]
]
which renders as:
  1. first
  2. second
  3. third
Insane nested list with two space indentation:
* a
  * a1
  * a2
  * a2
* b
* c
which renders as:
  • a
    • a1
    • a2
    • a2
  • b
  • c
The indentation must always be exactly two spaces per nesting level; anything else leads to errors or unintended output.
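The indentation rule can be sketched as: nesting depth is the indent width divided by two, and any odd indent is an error. A minimal illustration, not the actual parser:

```python
def insane_list_depth(line):
    """Return (depth, content) for an insane list item line.

    Indentation must be exactly two spaces per nesting level, as
    described above. Illustrative sketch only, not OurBigBook's
    actual lexer.
    """
    indent = len(line) - len(line.lstrip(' '))
    if indent % 2 != 0:
        raise ValueError('indentation must be a multiple of two spaces')
    content = line.lstrip(' ')
    if not content.startswith('* '):
        raise ValueError('not an insane list item')
    return indent // 2, content[2:]
```

For example, `* a` has depth 0 and `  * a1` has depth 1, matching the nested rendering above.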
Equivalent saner nested lists with implicit containers:
\L[
a
\L[a1]
\L[a2]
\L[a2]
]
\L[b]
\L[c]
which renders as:
  • a
    • a1
    • a2
    • a2
  • b
  • c
Insane list item with a paragraph inside of it:
* a
* I have

  Multiple paragraphs.

  * And
  * also
  * a
  * list
* c
which renders as:
  • a
  • I have
    Multiple paragraphs.
    • And
    • also
    • a
    • list
  • c
Equivalent sane version:
\L[a]
\L[
I have

Multiple paragraphs.

\L[And]
\L[also]
\L[a]
\L[list]
]
\L[c]
which renders as:
  • a
  • I have
    Multiple paragraphs.
    • And
    • also
    • a
    • list
  • c
Insane lists may be escaped with a backslash as usual:
\* paragraph starting with an asterisk.
which renders as:
* paragraph starting with an asterisk.
You can also start insane lists immediately at the start of a positional or named argument, e.g.:
\P[* a
* b
* c
]
which renders as:
  • a
  • b
  • c
And now a list outside of \OurBigBookExample to test how it looks directly under the \Toplevel implicit macro:
Via KaTeX server side, oh yes!
Inline math is done with the dollar sign ($) insane macro shortcut:
My inline $\sqrt{1 + 1}$ is awesome.
which renders as:
My inline is awesome.
and block math is done with two or more dollar signs ($$):
$$
\sqrt{1 + 1} \\
\sqrt{1 + 1}
$$
which renders as:
The sane version of inline math is a lower case m:
My inline \m[[\sqrt{1 + 1}]] is awesome.
which renders as:
My inline is awesome.
and the sane version of block math is with an upper case M:
\M[[
\sqrt{1 + 1} \\
\sqrt{1 + 1}
]]
which renders as:
The capital vs lower case theme is also used in other elements, see: block vs inline macros.
In the sane syntax, as with any other argument, you have to either escape any closing square brackets ] with a backslash \:
My inline \m[1 - \[1 + 1\] = -1] is awesome.
which renders as:
My inline is awesome.
or with the equivalent double open and close:
My inline \m[[1 - [1 + 1] = -1]] is awesome.
HTML escaping happens as you would expect, e.g. < shows fine in:
$$
1 < 2
$$
which renders as:
Equation IDs, titles, and linking to equations work identically to images; see that section for full details. Here is one equation reference example that links to the following insane syntax equation: Equation 7. "My first insane equation":
$$
\sqrt{1 + 1}
$$
{title=My first insane equation}
which renders as:
Equation 7. My first insane equation.
and the sane equivalent Equation 8. "My first sane equation":
\M{title=My first sane equation}[[
\sqrt{1 + 1}
]]
which renders as:
Equation 8. My first sane equation.
Here is a raw one just to test the formatting outside of an ourbigbook_comment:
Here is a very long math equation:
4.2.12.1. \M argument
words: 67 articles: 2
See: Section 4.3.3.11.3. "description argument".
See the: <equation Pythagoras theorem>.

$$
c = \sqrt{a^2 + b^2}
$$
{title=Pythagoras theorem}
{description=This important equation allows us to find the distance between two points.}
which renders as:
Equation 11. Pythagoras theorem. This important equation allows us to find the distance between two points.
See: Section 4.3.3.11.2. "title argument".
Example:
See the: <equation Riemann zeta function>.

$$
\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}
$$
{title=Riemann zeta function}
which renders as:
Equation 12. Riemann zeta function.
4.2.12.2. LaTeX macros
words: 381 articles: 4
OurBigBook ships with several commonly used math macros enabled by default.
The full list of built-in macros can be seen at: default.tex.
Here's one example of using \dv from the physics package for derivatives:
$$
\dv{x^2}{x} = 2x
$$
which renders as:
Our goal is to collect the most popular macros from the most popular pre-existing LaTeX packages and make them available with this mechanism.
The built-in macros are currently only available on OurBigBook CLI and OurBigBook Web, not when using the JavaScript API directly. We should likely make that possible as well at some point.
If your project has multiple .bigb input files, you can share math macro definitions across all files by adding them to the ourbigbook.tex file in the toplevel directory.
For example, if ourbigbook.tex contains:
\newcommand{\foo}[0]{bar}
then any .bigb file in the project can use:
$$
\foo
$$
Note however that this is not portable to OurBigBook Web, and likely never will be, as we want Web source to be reusable across authors. The only way to avoid macro definition conflicts would be to have a namespace system in place, which sounds hard/impossible.
Ideally, you should only use this as a temporary mechanism while you make a pull request to modify the built-in math macros :-)
Besides using ourbigbook.tex, you can also define your own math macros directly in the source code.
This is generally fragile however because it doesn't work:
If you still want to do it for some reason, first create an invisible block (with {show=0}) containing a \newcommand definition:
$$
\newcommand{\foo}[0]{bar}
$${show=0}
which renders as:
We make it invisible because this block only contains KaTeX definitions, and should not render to anything.
Then the second math block uses those definitions:
$$
\foo
$$
which renders as:
Analogously, with a \gdef definition:
$$
\gdef\foogdef{bar}
$${show=0}
which renders as:
and the second block using it:
$$
\foogdef
$$
which renders as:
And just to test that {show=1} actually shows, although it is useless, and that {show=0} skips incrementing the equation count:
$$1 + 1$${show=1}
$$2 + 2$${show=0}
$$3 + 3$${show=1}
which renders as:
Shows both the OurBigBook code and its rendered output, e.g.:
\OurBigBookExample[[
Some `inline` code.
]]
which renders as:
Some `inline` code.
which renders as:
Some inline code.
Its input should be thought of as a literal code string, and it then injects the rendered output into the document.
This macro is used extensively in the OurBigBook documentation.
OK, this is too common, so we opted for some insanity here: double newline is a paragraph!
Paragraph 1.

Paragraph 2.
which renders as:
Paragraph 1.
Paragraph 2.
Equivalently however, you can use an explicit \P macro, which is required for example to add properties to a paragraph, e.g.:
\P{id=paragraph-1}[Paragraph 1]
\P{id=paragraph-2}[Paragraph 2]
which renders as:
Paragraph 1
Paragraph 2
Paragraphs are created automatically inside macro arguments whenever a double newline appears.
Note that OurBigBook paragraphs render in HTML as div with class="p" and not as p. This means that you can add basically anything inside them, e.g. a list:
My favorite list is:
\Ul[
\li[aa]
\li[bb]
]
because it is simple.
which renders as a single paragraph.
One major advantage of this is that when writing documentation, you often want to keep lists or code blocks inside a given paragraph, so that it is easy to reference the entire paragraph with an ID. Think for example of paragraphs in the C++ standard.
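The double-newline paragraph rule with div class="p" output can be sketched as a tiny renderer. A minimal sketch of the behavior described above; real OurBigBook output carries more attributes (e.g. autogenerated IDs):

```python
def render_paragraphs(text):
    """Render double-newline separated paragraphs as divs with
    class "p", as described above.

    Minimal illustrative sketch; not OurBigBook's actual renderer.
    """
    paragraphs = [p for p in text.split('\n\n') if p.strip()]
    return '\n'.join('<div class="p">%s</div>' % p for p in paragraphs)
```

Because the container is a div and not a p, nested block content such as lists remains valid HTML inside a "paragraph".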
Dumps its contents directly into the rendered output.
This construct is not XSS safe, see: Section 10.2. "unsafe-xss (--unsafe-xss)".
Here for example we define a paragraph in raw HTML:
\passthrough[[
<p>Hello <b>raw</b> HTML!</p>
]]
which renders as:

Hello raw HTML!

And for an inline passthrough:
Hello \passthrough[[<b>raw</b>]] world!
which renders as:
Hello raw world!

4.2.16. Quotation (\Q)

words: 161 articles: 3
With \Q:
And so he said:

\Q[
Something very smart

And with multiple paragraphs.
]

and it was great.
which renders as:
And so he said:
Something very smart
And with multiple paragraphs.
and it was great.
4.2.16.1. \Q argument
words: 123 articles: 2
See: Section 4.3.3.11.3. "description argument".
Example:
See the: <quote Hamlet what we are>.

\Q[We know what we are, but not what we may be.]
{title=Hamlet what we are}
{description=This quote refers to human's inability to know their own potential, despite understanding their current abilities.}
which renders as:
Quote 1. Hamlet what we are. This quote refers to human's inability to know their own potential, despite understanding their current abilities.
We know what we are, but not what we may be.
See: Section 4.3.3.11.2. "title argument".
Example:
See the: <quote Julius Caesar star>.

\Q[The fault, dear Brutus, is not in our stars, but in ourselves.]
{title=Julius Caesar star}
which renders as:
Quote 2. Julius Caesar star.
The fault, dear Brutus, is not in our stars, but in ourselves.
The insane syntax:
  • marks headers with || (two pipes followed by a space) at the start of a line
  • marks regular cells with | (one pipe followed by a space) at the start of a line
  • separates rows with a double newline
For example:
|| Header 1
|| Header 2

| 1 1
| 1 2

| 2 1
| 2 2
which renders as:
Header 1 | Header 2
1 1      | 1 2
2 1      | 2 2
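As a mental model only (this is not the actual OurBigBook parser, just an illustrative sketch for single-line cells), the row and cell markers could be recognized along these lines:

```python
# Illustrative sketch of the insane table markers; not the real parser.
def parse_insane_table(source):
    rows = []
    # A double newline separates rows.
    for block in source.strip().split("\n\n"):
        row = []
        for line in block.split("\n"):
            if line.startswith("|| "):
                row.append(("header", line[3:]))
            elif line.startswith("| "):
                row.append(("cell", line[2:]))
            elif line == "|":
                row.append(("cell", ""))  # a bare "|" is an empty cell
        rows.append(row)
    return rows
```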
Empty cells are also allowed; for those, the trailing space is omitted:
| 1 1
|
| 1 3

| 2 1
|
| 2 3
which renders as:
1 1 |  | 1 3
2 1 |  | 2 3
Equivalent fully explicit version:
\Table[
\Tr[
  \Th[Header 1]
  \Th[Header 2]
]
\Tr[
  \Td[1 1]
  \Td[1 2]
]
\Tr[
  \Td[2 1]
  \Td[2 2]
]
]
which renders as:
Header 1 | Header 2
1 1      | 1 2
2 1      | 2 2
White space indentation inside an explicit \Tr can make the code more readable; it is automatically removed from the final output because the remove_whitespace_children property is set for \Table.
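Conceptually (a sketch under the assumption that child arguments are a mix of text nodes and element nodes), remove_whitespace_children just drops whitespace-only text children:

```python
# Hypothetical model of remove_whitespace_children: text nodes are plain
# strings, element nodes are anything else; whitespace-only strings are
# dropped, everything else is kept.
def remove_whitespace_children(children):
    return [c for c in children
            if not (isinstance(c, str) and c.strip() == "")]
```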
To pass further arguments to an implicit table such as title or id, you need to use an explicit table macro as in: Table 3. "My table title".
\Table
{title=My table title}
{id=table-my-table}
[
|| Header 1
|| Header 2

| 1 1
| 1 2

| 2 1
| 2 2
]
which renders as:
Table 3. My table title.
Header 1 | Header 2
1 1      | 1 2
2 1      | 2 2
We would like to remove that explicit toplevel requirement, see: github.com/ourbigbook/ourbigbook/issues/186. The rules for when the caption shows up are similar to those of images, as mentioned at Section 4.2.7.1.1. "Image caption".
Multiple source lines, including paragraphs, can be added to a single cell with insane syntax by indenting the cell with exactly two spaces just as for lists, e.g.:
|| h1
|| h2
|| h3

  h3 2

| 11
| 12

  12 2
| 13

| 21
| 22
| 23
which renders as:
h1 | h2   | h3
   |      | h3 2
11 | 12   | 13
   | 12 2 |
21 | 22   | 23
Arbitrarily complex nested constructs may be used, e.g. a table inside a list inside a table:
| 00
| 01

  * l1
  * l2

    | 20
    | 21

    | 30
    | 31

| 10
| 11
which renders as:
00 | 01
   |   • l1
   |   • l2
   |     20 | 21
   |     30 | 31
10 | 11
And now a table outside of \OurBigBookExample to test how it looks directly under the \Toplevel implicit macro:
Table 4. My table title.
Header 1 | Header 2
1 1      | 1 2
2 1      | 2 2
And a fully insane one:
Header 1 | Header 2
1 1      | 1 2
2 1      | 2 2
JavaScript interactive on-click table sorting is enabled by default, try it out by clicking on the header row:
|| String col
|| Integer col
|| Float col

| ab
| 2
| 10.1

| a
| 10
| 10.2

| c
| 2
| 3.4
which renders as:
String col | Integer col | Float col
ab         | 2           | 10.1
a          | 10          | 10.2
c          | 2           | 3.4
Powered by: github.com/tristen/tablesort
4.2.17.2. \Table argument
words: 4 articles: 2
See: Section 4.3.3.11.3. "description argument".
See: Section 4.3.3.11.2. "title argument".

4.2.18. Table of contents (ToC)

words: 371 articles: 1
OurBigBook automatically adds a table of contents before the first non-toplevel header of every document.
For example, on a standard document with a single toplevel header:
= Animal

Animals are cute!

== Dog

== Cat
the ToC is rendered something like:
= Animal

Animals are cute!

Table of Contents
* Dog
* Cat

== Dog

== Cat
The ToC ignores the toplevel header if you have one.
For when you want a quick outline of the header tree on the terminal, also consider the --log headers option.
To the left of table of content entries you can click on an open/close icon to toggle the visibility of different levels of the table of contents.
The main use case covered by the expansion algorithm is as follows:
  • the page starts with all nodes open to facilitate Ctrl + F queries
  • if you click on a node in that state, you close all its children, to get a summarized overview of the contents
  • if you click one of those children, it opens only its own children, so you can interactively continue exploring the tree
The exact behaviour is:
  • the clicked node is open:
    • state 1: all children are closed. Action: open all children recursively, which puts us in state 2.
    • state 2: not all children are closed. Action: close all children, which puts us in state 1. This gives a good overview of the children, without any children of children getting in the way.
  • state 3: the clicked node is closed (not showing any children). Action: open it to show all direct children, but not further descendants (i.e. close those children). This puts us in state 1.
Note that those rules make it impossible to close a node by clicking on it; the only way to close a node is to click on its parent. The state transitions are:
  • 3 -> 1
  • 1 -> 2
  • 2 -> 1
but we feel that it is worth it to do things like this to cover the main use case described above without having to add two buttons per entry.
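The state machine above can be sketched as follows. This is a hypothetical model, not the actual OurBigBook frontend JavaScript; a node counts as "open" when its direct children are visible:

```python
# Sketch of the ToC click behaviour: states 1/2/3 as described in the text.
class Node:
    def __init__(self, children=()):
        self.children = list(children)
        self.open = True  # the page starts with all nodes open

    def descendants(self):
        for child in self.children:
            yield child
            yield from child.descendants()

def click(node):
    if node.open:
        if all(not c.open for c in node.children):
            # State 1 -> 2: open all children recursively.
            for d in node.descendants():
                d.open = True
        else:
            # State 2 -> 1: close all direct children.
            for c in node.children:
                c.open = False
    else:
        # State 3 -> 1: open direct children only, keep them closed.
        node.open = True
        for c in node.children:
            c.open = False
```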
Clicking on the link from a header up to the table of contents also automatically opens up the node for you in case it had been previously closed manually.

4.2.19. Video (\Video)

words: 511 articles: 5
Videos are very analogous to images; only the differences are documented here.
In the case of videos, where to store the media becomes even more critical, since videos are even larger than images, making the following storage approaches impractical off the bat:
As a result, Wikimedia Commons is one of the best options, much like for images:
\Video[https://upload.wikimedia.org/wikipedia/commons/8/85/Vacuum_pump_filter_cut_and_place_in_eppendorf.webm]
{id=sample-video-in-wikimedia-commons}
{title=Nice sample video stored in Wikimedia Commons}
{start=5}
which renders as:
Video 4. Nice sample video stored in Wikimedia Commons. Source.
We also handle more complex transcoded video URLs just fine:
\Video[https://upload.wikimedia.org/wikipedia/commons/transcoded/1/19/Scientific_Industries_Inc_Vortex-Genie_2_running.ogv/Scientific_Industries_Inc_Vortex-Genie_2_running.ogv.480p.vp9.webm]
{id=sample-video-in-wikimedia-commons-transcoded}
{title=Nice sample video stored in Wikimedia Commons transcoded}
which renders as:
Video 5. Nice sample video stored in Wikimedia Commons transcoded. Source.
Commons is better than YouTube if your content is on-topic there because:
If your video does not fit the above Wikimedia Commons requirements, YouTube could be a good bet. OurBigBook automatically detects YouTube URLs for you, so the following should just work:
\Video[https://youtube.com/watch?v=YeFzeNAHEhU&t=38]
{id=sample-video-from-youtube-implicit-youtube}
{title=Nice sample video embedded from YouTube implicit from `youtube.com` URL}
which renders as:
Video 6. Nice sample video embedded from YouTube implicit from youtube.com URL. Source.
The youtu.be domain hack URLs also work:
\Video[https://youtu.be/YeFzeNAHEhU?t=38]
{id=sample-video-from-youtube-implicit-youtu-be}
{title=Nice sample video embedded from YouTube implicit from `youtu.be` URL}
which renders as:
Video 7. Nice sample video embedded from YouTube implicit from youtu.be URL. Source.
Alternatively, you can reach the same result in a more explicit and minimal way by setting {provider=youtube} and the start argument:
\Video[YeFzeNAHEhU]{provider=youtube}
{id=sample-video-from-youtube-explicit}
{title=Nice sample video embedded from YouTube with explicit `youtube` argument}
{start=38}
which renders as:
Video 8. Nice sample video embedded from YouTube with explicit youtube argument. Source.
When the youtube provider is selected, the Video address should only contain the YouTube video ID, which shows in the YouTube URL for the video as:
https://www.youtube.com/watch?v=<video-id>
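The automatic detection of the two YouTube URL forms could work roughly like this sketch (hypothetical logic; the real OurBigBook detection may differ in details):

```python
# Sketch: extract the YouTube video ID and optional start time from
# youtube.com/watch and youtu.be URLs. Returns None for other providers.
from urllib.parse import urlparse, parse_qs

def parse_youtube(url):
    u = urlparse(url)
    qs = parse_qs(u.query)
    if u.netloc.endswith("youtu.be"):
        video_id = u.path.lstrip("/")     # youtu.be/<video-id>
    elif "youtube.com" in u.netloc:
        video_id = qs["v"][0]             # youtube.com/watch?v=<video-id>
    else:
        return None
    start = qs.get("t", [None])[0]        # optional start time in seconds
    return video_id, start
```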
Remember that you can also enable the youtube provider by default on your ourbigbook.json with:
"media-provider": {
  "youtube": {"default-for": "video"}
}
But you can also use raw video files from any location that can serve them of course, e.g. here is one stored in this repository: Video 9. "Nice sample video stored in this repository".
\Video[Tank_man_side_hopping_in_front_of_some_tanks.mp4]
{id=sample-video-in-repository}
{title=Nice sample video stored in this repository}
{source=https://www.youtube.com/watch?v=YeFzeNAHEhU}
{start=3}
which renders as:
Video 9. Nice sample video stored in this repository. Source.
And as for images, setting titleFromSrc automatically calculates a title for you:
\Video[Tank_man_side_hopping_in_front_of_some_tanks.mp4]
{titleFromSrc}
{source=https://www.youtube.com/watch?v=YeFzeNAHEhU}
which renders as:
Video 10. Tank man side hopping in front of some tanks. Source.
Unlike image lazy loading, we don't support video lazy loading yet because:
  • non-youtube videos use the video tag which has no loading property yet
  • youtube videos are embedded with iframe and iframe has no loading property yet
Both of these cases could be worked around with JavaScript:
4.2.19.2. \Video argument
words: 21 articles: 3
See: Section 4.3.3.11.3. "description argument".
See: Section 4.3.3.11.2. "title argument".
The time to start playing the video at in seconds. Works for both youtube and non-YouTube videos.

4.2.20. Cross reference (\x macro)

words: 4k articles: 38
Every macro in OurBigBook can have an optional id argument, and many also have a reserved title property.
When a macro in the document has a title argument but no id argument, it gets an auto-generated ID from the title: automatic ID from title.
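As a rough approximation (the real conversion handles more cases such as punctuation and Unicode, so treat this as an assumption-laden sketch), automatic ID from title behaves like:

```python
# Illustrative sketch of "automatic ID from title": lowercase the title
# and turn spaces into hyphens. Not the full OurBigBook conversion.
def auto_id(title):
    return title.lower().replace(" ", "-")
```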
Usually, the most convenient way to write cross references is with the insane syntax, delimited by angle brackets:
<Cross references> are awesome.
which renders as:
Cross references are awesome.
More details at: insane cross reference.
The sane equivalent to this is:
\x[cross-reference]{c}{p} are awesome section.
which renders as:
Cross references are awesome section.
Note how that is more verbose, especially because here we use both the \x c argument and \x p argument to capitalize and pluralize as desired.
Another sane equivalent would be to add an explicit link body as in:
\x[cross-reference][Cross references] are awesome.
which renders as:
Cross references are awesome.
When you use an insane cross reference (<>) such as in:
<Cross references> are awesome.
which renders as:
Cross references are awesome.
it gets expanded exactly to the sane equivalent:
\x[Cross references]{magic} are awesome.
so we see that the \x magic argument gets added. It is that argument that for example adds the missing -, and removes the pluralization to find the correct ID cross-reference. For more details, see the documentation of the \x magic argument.
Like other insane constructs, insane cross references are exactly equivalent to the sane version, so you can just add other arguments after the construct, e.g.:
<Cross references>{full} are awesome.
which renders as:
which gets converted to exactly the same as the sane:
\x[cross-reference]{full} are awesome.
which renders as:
In most cases it is generally more convenient to simply use the \x magic argument through insane cross references instead of the c and p arguments as described on the rest of this section, see also: Section 4.2.20.3. "Inflection vs magic".
A common usage pattern is that we want to use header titles in non-full cross references as the definition of a concept without repeating the title, for example:
== Dog

Cute animal.

\x[cats][Cats] are its natural enemies.

== Cats

This is the natural enemy of a \x[dog][dog].

\x[dog][Dogs] are cute, but they are still the enemy.

One example of a cat is \x[felix-the-cat].

=== Felix the Cat

Felix is not really a \x[cats][cat], just a cartoon character.
However, word inflection makes it much harder to avoid retyping the definition again.
For example, in the previous example, without any further intelligent behaviour we would be forced to re-type \x[dog][dog] instead of the desired \x[dog].
OurBigBook can take care of some inflection cases for you.
For capitalization, both headers and cross reference macros have the c boolean argument which stands for "capitalized":
  • for headers, c means that the header title has fixed capitalization as given in the title, i.e.
    • if the title has a capital first character, it will always show as a capital, as is the case for most proper nouns
    • if it is lower case, it will also always remain lower case, as is the case for some rare proper nouns, notably the name of certain companies
    This means that for such headers, c in the x has no effect. Maybe we should give an error in that case. But lazy now, send PR.
  • for cross reference macros, c means that the first letter of the title should be capitalized.
    Using this option is required when you are starting a sentence with a non-proper noun.
Capitalization is handled by a JavaScript case conversion.
For pluralization, cross reference macros have the p boolean argument which stands for "pluralize":
  • if given and true, this automatically pluralizes the last word of the target title by using the github.com/blakeembrey/pluralize library
  • if given and false, automatically singularize
  • if not given, don't change the number of elements
If your desired pluralization is any more complex than modifying the last word of the title, you must do it manually however.
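The {c} and {p} rules above can be sketched as follows. This is a hypothetical model: the real implementation delegates pluralization to the github.com/blakeembrey/pluralize library, which we fake here by adding or stripping a trailing "s":

```python
# Toy stand-in for the pluralize library: real inflection is smarter.
def naive_pluralize(word, plural):
    if plural:
        return word if word.endswith("s") else word + "s"
    return word[:-1] if word.endswith("s") else word

def inflect_title(title, c=None, p=None):
    # {p}: pluralize (or singularize) the LAST word of the title only.
    if p is not None:
        words = title.split(" ")
        words[-1] = naive_pluralize(words[-1], p)
        title = " ".join(words)
    # {c}: change only the first character's case.
    if c is not None:
        first = title[0].upper() if c else title[0].lower()
        title = first + title[1:]
    return title
```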
With those rules in mind, the previous OurBigBook example can be written with less repetition as:
== Dog

Cute animal.

\x[cats]{c} are its natural enemies.

== Cats

This is the natural enemy of a \x[dog].

\x[dog]{p} are cute, but they are still the enemy.

One example of a cat is \x[Felix the Cat].

=== Felix the Cat
{c}

Felix is not really a \x[cats][cat], just a cartoon character.
If plural and capitalization don't handle your common desired inflections, you can also just create custom ones with the \H synonym argument.
Now for some live examples for quick and dirty interactive testing:
\x[inflection-example-not-proper]
which renders as:
\x[inflection-example-not-proper]{c}
which renders as:
\x[inflection-example-not-proper]{full}
which renders as:
\x[inflection-example-proper]
which renders as:
\x[inflection-example-proper]{c}
which renders as:
\x[inflection-example-not-proper-lower]
which renders as:
\x[inflection-example-not-proper-lower]{c}
which renders as:
\x[inflection-example-proper-lower]
which renders as:
\x[not-readme]
which renders as:
\x[not-readme]{c}
which renders as:
\x[inflection-example-not-proper]{p}
which renders as:
\x[inflection-plural-examples]
which renders as:
\x[inflection-plural-examples]{p}
which renders as:
\x[inflection-plural-examples]{p=0}
which renders as:
\x[inflection-plural-examples]{p=1}
which renders as:
\x[not-the-readme-header-with-fixed-case]
which renders as:
The \x magic argument was introduced later, and basically became a better alternative to cross reference title inflection in all but the following cases:
  • \H disambiguate argument: disambiguate prevents the determination of plural inflection, e.g. in:
    = Python
    {disambiguate=animal}
    
    I like <python animal>.
    there is currently no way to make it output Pythons in the plural without resorting to either \x p argument or an explicit content, because if you wrote:
    I like <pythons animal>.
    it would just lead to Id not found, as we would try the plural vs singular on animal only.
    Maybe one day we can implement an even insaner system that understands that the parenthesis should be skipped for the inflection, as in:
    I like <pythons (animal)>.
    github.com/ourbigbook/ourbigbook/issues/244
If you use \x within a title, which most commonly happens for image titles, that can generate complex dependencies between IDs, which would either be harder to implement, or lead to infinite recursion.
To prevent such problems, OurBigBook emits an error if you use an \x without content in the title of one of the following elements:
  • any header. For example, the following gives an error:
    = h1
    {id=myh1}
    
    == \x[myh1]
    This could be solved by either adding a content to the reference:
    = h1
    {id=myh1}
    
    == \x[myh1][mycontent]
    or by adding an explicit ID to the header:
    = h1
    {id=myh1}
    
    == \x[myh1]
    {id=myh2}
  • non-header (e.g. an image) that links to the title of another non-header
    For non-headers, things are a bit more relaxed, and we can link to headers, e.g.:
    = h1
    
    \Image[myimg.jpg]
    {title=my \x[h1]}
    This is allowed because OurBigBook calculates IDs in two stages: first for all headers, and only later for non-headers.
    What you cannot do is link to another image e.g.:
    \Image[myimg.jpg]
    {id=myimage1}
    {title=My image 1}
    
    \Image[myimg.jpg]
    {title=my \x[h1]}
    and there the workarounds are much the same as for headers: either explicitly set the cross reference content:
    \Image[myimg.jpg]
    {id=myimage1}
    {title=My image 1}
    
    \Image[myimg.jpg]
    {title=my \x[h1][My image 1]}
    or explicitly set an ID:
    \Image[myimg.jpg]
    {id=myimage1}
    {title=My image 1}
    
    \Image[myimg.jpg]
    {id=myimage2}
    {title=my \x[h1]}
    TODO: both workarounds are currently broken, see Image title with x to image with content incorrectly disallowed; we forgot to add a test earlier on, and things inevitably broke... Should not be hard to fix though, we are just overchecking.
While it is technically possible to relax the above limitations and give an error only in case of loops, it would require a bit of extra work which we don't want to put in right now: github.com/ourbigbook/ourbigbook/issues/95.
Furthermore, the above rules do not exclude infinite rendering loops, but OurBigBook detects such loops and gives a nice error message. This was fixed at: github.com/ourbigbook/ourbigbook/issues/34
For example this would contain an infinite loop:
\Image[myimg.jpg]
{id=myimage1}
{title=\x[myimage2]}

\Image[myimg.jpg]
{id=myimage2}
{title=\x[myimage1]}
This infinite recursion fundamentally cannot be solved automatically: the user has to manually break the loop by providing an \x content explicitly, e.g. in either:
\Image[myimg.jpg]
{id=myimage1}
{title=\x[myimage2][my content 2]}

\Image[myimg.jpg]
{id=myimage2}
{title=\x[myimage1]}
or:
\Image[myimg.jpg]
{id=myimage1}
{title=\x[myimage2]}

\Image[myimg.jpg]
{id=myimage2}
{title=\x[myimage1][my content 1]}
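Detecting such a loop is straightforward to sketch. The following is a hypothetical model (the real detection happens inside OurBigBook's rendering code), where each ID maps to the ID its title references, if any:

```python
# Sketch: walk the chain of title references from a starting ID and
# report whether it ever revisits an ID (an infinite rendering loop).
def find_ref_cycle(refs, start):
    seen = set()
    node = start
    while node is not None:
        if node in seen:
            return True  # loop detected
        seen.add(node)
        node = refs.get(node)
    return False
```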
A closely related limitation is the simplistic approach to \x id output format.
4.2.20.5. Cross file reference
words: 881 articles: 3
Reference to the first header of another file:
\x[not-readme]
which renders as:
Reference to a non-first header of another file:
\x[h2-in-not-the-readme]
which renders as:
To make toplevel links cleaner, if the target header is the very first element of the other page, then the link does not get a fragment, e.g. \x[not-readme] is rendered as:
<a href="not-readme"
and not:
<a href="not-readme#not-readme"
while \x[h2-in-not-the-readme] is rendered with the fragment:
<a href="not-readme#h2-in-not-the-readme"
Reference to the first header of another file that is a second inclusion:
\x[included-by-not-readme]
which renders as:
Reference to another header of another file, with full:
\x[h2-in-not-the-readme]{full}.
which renders as:
Note that when full is used with references in another file in multi page mode, the number is not rendered as explained at: Section 4.2.20.6.4.1. "\x full argument in cross file references".
Reference to an image in another file:
\x[image-not-readme-xi]{full}.
which renders as:
Reference to an image in another file:
\x[image-figure-in-not-the-readme-without-explicit-id]{full}.
which renders as:
Remember that the ID of the toplevel header is automatically derived from its file name, that's why we have to use:
\x[not-readme]
which renders as:
instead of:
\x[not-the-readme]
Reference to a subdirectory:
\x[subdir]

\x[subdir/h2]

\x[subdir/notindex]

\x[subdir/notindex-h2]
which renders as:
Implemented at: github.com/ourbigbook/ourbigbook/issues/116
Reference to an internal header of another file: h2 in not the README. By default, that header ID gets prefixed by the ID of the top header.
When using --embed-includes mode, the cross file references end up pointing to an ID inside the current HTML element, e.g.:
<a href="#not-readme">
rather than:
<a href="not-readme.html/#not-readme">
This is why IDs must be unique for elements across all pages.
When running in Node.js, OurBigBook dumps the IDs of all processed files to a out/db.sqlite3 file in the out directory, and then reads from that file when IDs are needed.
When converting under a directory that contains ourbigbook.json, out/db.sqlite3 is placed inside the same directory as the ourbigbook.json file.
If there is no ourbigbook.json in parent directories, then out/db.sqlite3 is placed in the current working directory.
This follows the principles described at: the current working directory does not matter when there is a ourbigbook.json.
db.sqlite3 is not created or used when handling input from stdin.
When running in the browser, the same JavaScript API will send queries to the server instead of a local SQLite database.
To inspect the ID database to debug it, you can use:
sqlite3 out/db.sqlite3 .dump
It is often useful to dump a single table, e.g. to dump the ids table:
sqlite3 out/db.sqlite3 '.dump ids'
and one particularly important query is to dump a list of all known IDs:
sqlite3 out/db.sqlite3 'select id from ids'
You can force ourbigbook to not use the ID database with the --no-db command line option.
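The same inspection can be done programmatically. A minimal sketch, assuming (as the dumps above suggest) that the ids table has an id column:

```python
# Read all known IDs from the OurBigBook ID database.
import sqlite3

def list_ids(db_path="out/db.sqlite3"):
    conn = sqlite3.connect(db_path)
    try:
        return [row[0] for row in conn.execute("SELECT id FROM ids")]
    finally:
        conn.close()
```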
This section describes the philosophy of internal cross references.
In many static website generators, you just link to URL specific paths of internal headers.
In OurBigBook, internal cross references point to IDs, not paths.
For example, suppose "Superconductivity" is a descendant of "Condensed Matter Physics", and that the source for both is located at condensed-matter-physics.bigb, so that both appear on the same .html page condensed-matter-physics.html.
When linking to Superconductivity from an external page such as statistical-physics.bigb you write just <superconductivity> and NOT <condensed-matter-physics#superconductivity>. OurBigBook then automatically tracks where superconductivity is located and produces href="condensed-matter-physics#superconductivity" for you.
This is important because on a static website, the location of headers might change. E.g. if you start writing a lot about superconductivity you would eventually want to split it to its own page, superconductivity.html otherwise page loads for condensed-matter-physics.html would become too slow as that file would become too large.
But if your links read <condensed-matter-physics#superconductivity>, all links would break when you move things around.
So instead, you simply link to the ID <superconductivity>, and ourbigbook renders links correctly for you wherever the output lands.
When moving headers to separate pages, it is true that existing links to subheaders will break, but that simply cannot be helped. Large pages must be split into smaller ones. The issue can be mitigated in the following ways:
For OurBigBook Web, this is even more important, as we have dynamic article trees, so every header can appear on top.
If you really want to use scopes, e.g. to enforce the ID of "superconductivity" to be "condensed-matter-physics/superconductivity", then you can use the scope feature. However, this particular case would likely be a bad use case for that feature. You want your IDs to be as short as possible, which causes less need for refactoring, and makes topics on OurBigBook Web more likely to have matches from other users.
If the target title argument contains a link, from either another cross reference or a regular external hyperlink, OurBigBook automatically prevents that link from rendering as a link when no explicit body is given.
This is done because nested links are illegal in HTML, and the result would be confusing.
This use case is most common when dealing with media such as images. For example in:
= afds

\x[image-aa-zxcv-lolol-bb]

== qwer

\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=aa \x[zxcv][zxcv] \a[http://example.com][lolol] bb}

== zxcv
the \x[image-aa-zxcv-lolol-bb] renders something like:
<a href="#image-aa-zxcv-lolol-bb">aa zxcv lolol bb</a>
and not:
<a href="#image-aa-zxcv-lolol-bb">aa <a href="zxcv">zxcv</a> <a href="http://example.com">lolol</a> bb</a>
Live example:
This is a nice image: \x[image-aa-zxcv-lolol-bb].

\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=aa \x[cross-reference-title-link-removal][zxcv] \a[http://example.com][lolol] bb}
which renders as:
This is a nice image: Figure 22. "aa zxcv lolol bb".
Figure 22. aa zxcv lolol bb.
4.2.20.6. \x arguments
words: 2k articles: 23
Capitalizes the first letter of the target title.
For more details, see: Section 4.2.20.2. "Cross reference title inflection".
4.2.20.6.2. \x child argument
words: 449 articles: 9
Setting the child boolean argument on a cross reference to a header as in:
\x[my-header]{child}
makes that header show up on the list of extra parents of the child.
This allows a section to have multiple parents, e.g. to include it into multiple categories. For example:
= Animal

== Mammal

=== Bat

=== Cat

== Flying animal

These animals fly:
* \x[bat]{child}

These animals don't fly:
* \x[cat]
would render something like:
= Animal

== Mammal

=== Bat (Parent section: Mammal)
(Tags: Flying animal)

=== Cat (Parent section: Mammal)

== Flying animal (Parent section: Animal)

These animals fly:
* \x[bat]

These animals don't fly:
* \x[cat]
so note how "Bat" has a list of tags including "Flying animal", but "Cat" does not, due to the child argument.
This property does not affect how the table of contents is rendered. We could insert sections there multiple times, but that has the downside that browser Ctrl + F searches would hit the same thing multiple times in the table of contents, which might make finding things harder.
== My title{id=my-id}

Read this \x[my-id][amazing section].
If the second argument, the content argument, is not present, it expands to the header title, e.g.:
== My title{id=my-id}

Read this \x[my-id].
is the same as:
== My title{id=my-id}

Read this \x[my-id][My title].
A live demo can be seen at: \x child argument demo.
Generally, a better alternative to this argument is to use \H child argument.
The term refers to sections that have a parent/child relationship via either of the:
rather than via the usual header hierarchy.
Secondary children show up for example on the tagged metadata section, but not on the table of contents, which is what the header hierarchy already shows.
Secondary children are basically used as "tags": a header such as Bat can be a direct child of Mammal and a secondary child of Flying animal, or vice versa. Both Mammal and Flying animal are then basically ancestors. But we have to choose one main ancestor as "the parent", and the other secondary ancestors will be seen as tags.
This option first does automatic ID from title conversion on the argument, so you can e.g. keep any spaces or use capitalization in the title as in:
= Animal

== Flying animal
{child=Big bat}

== Big bat
TODO the fact that this transformation is done currently makes it impossible to use "non-standard IDs" that contain spaces or uppercase letters. If someone ever wants that, we could maybe add a separate argument that does not do the expansion e.g.:
= Animal

== Flying animal
{childId=Big bat}

== Big bat
{id=Big bat}
but definitely the most important use case is having easier to type and read source with the standard IDs.
4.2.20.6.2.2.1. Animal
words: 10 articles: 6
Oh, and cows are also mammals.
Bats can fly.
But cats can't.
Allows linking to headers that have the \H file argument, e.g.:
= My header

Check out this amazing file: <path/to/myfile.txt>{file}

== path/to/myfile.txt
Some live demos follow:
\x[file_demo]{file}
which renders as:
\x[file_demo/file_demo_subdir]{file}
which renders as:
\x[file_demo/file_demo_subdir/hello_world.js]{file}
which renders as:
\x[file_demo/my.bin]{file}
which renders as:
\x[Tank_man_standing_in_front_of_some_tanks.jpg]{file}
which renders as:
\x[https://www.youtube.com/watch?v=YeFzeNAHEhU]{file}
which renders as:
4.2.20.6.4. \x full argument
words: 133 articles: 1
To also show the section auto-generated number as in "Section X.Y My title" we add the optional {full} boolean argument to the cross reference, for example:
\x[x-full-argument]{full}.
which renders as:
{full} is not needed for cross references to most macros besides headers, which use full by default as seen by the default_x_style_full macro property in --help-macros. This is for example the case for images. You can force this to be disabled with {full=0}:
Compare \x[image-my-test-image]{full=0} vs \x[image-my-test-image]{full=1}.
which renders as:
For example in the following cross file reference:
\x[h2-in-not-the-readme]{full}.
which renders as:
we get just something like:
Section "h2 in not the readme"
instead of:
Section 1.2 "h2 in not the readme"
This is because the number "Section 1.2" might already have been used in the current page, leading to confusion.
4.2.20.6.5. \x magic argument
words: 385 articles: 1
This argument makes writing many internal links more convenient, and it was notably introduced because it serves as the sane version of insane cross references.
If given e.g. as in:
= Internal reference

\x[Internal references]{magic}
the link is treated magically as follows:
  • content capitalization and pluralization are detected from the string, and implicitly set the \x c argument and \x p argument. In the example:
    • {c} capitalization is set because Internal references starts with an upper case character I
    • {p} pluralization is set because Internal references ends in a plural word
    In this simple example, the content therefore will be exactly Internal references as in the source. But note that this does not necessarily need to be the case, e.g. if we had done:
    \x[Internal Reference]{magic}
    then the content would be:
    Internal reference
    without capital R, i.e. everything except capitalization and pluralization is ignored. This forgiving way of doing things means that writers don't need to remember the exact ideal capitalization of everything, which is very hard to remember.
    It also means that any more complex elements will be automatically rendered as usual, e.g. if we had:
    = \i[Internal] reference
    
    \x[internal reference]{magic}
    then the output would still contain the <i> italic tag.
    If we had a scope as in \x[my scope/Internal references], then each scope part is checked separately. E.g. in this case we would have upper case Internal references, even though my scope is lowercase, and so {c} would be set.
  • the ID is calculated as follows:
    • automatic ID from title conversion is performed, with one exception: forward slashes / are kept, in order to make scopes work.
      In our case, there aren't any slashes /, so it just gives internal-references. But if instead we had e.g.: \x[my scope/internal reference]{magic}, then we would reach my-scope/internal-reference and not my-scope-internal-reference.
    • if there is a match to an existing ID use it. internal-references in the plural does not match, so go to the next step
    • if the above failed, try singularizing the last word as in the \x p argument with p=0 before doing automatic ID from title conversion. This gives internal-reference, which does exist, and so we use that.
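The resolution steps above can be sketched in JavaScript. This is an illustrative sketch only: titleToId and the naive trailing-"s" singularization are hypothetical stand-ins for the real conversion and inflection logic.

```javascript
// Illustrative sketch of magic link ID resolution.
function titleToId(title) {
  // Automatic ID from title, except forward slashes are kept
  // so that scopes keep working.
  return title
    .toLowerCase()
    .replace(/[^a-z0-9/]+/g, '-')
    .replace(/^-+|-+$/g, '');
}

function resolveMagicTarget(target, existingIds) {
  const id = titleToId(target);
  // Step 1: exact match against an existing ID.
  if (existingIds.has(id)) return id;
  // Step 2: singularize the last word (as with p=0) and retry.
  const singular = id.replace(/s$/, '');
  if (existingIds.has(singular)) return singular;
  return undefined; // unresolved: would be reported as an error
}

const ids = new Set(['internal-reference', 'my-scope/internal-reference']);
console.log(resolveMagicTarget('Internal references', ids));
// internal-reference
console.log(resolveMagicTarget('my scope/internal reference', ids));
// my-scope/internal-reference
```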
There may be some cases where you might still want to use cross reference title inflection however, see: Section 4.2.20.3. "Inflection vs magic".
A magic link can be created more succinctly by surrounding the link with "angle brackets" (<>), e.g.:
<Partial derivative>
is equivalent to:
\x[Partial derivative]{magic}
The parent argument is exactly like the \x child argument, but it reverses the direction of the parent/child relation.
The ref argument of \x marks the link as reference, e.g.:
Trump said this and that.\x[donald-trump-access-hollywood-tape]{ref}

= Donald Trump Access Hollywood tape
renders something like:
Trump said this and that.<a href="donald-trump-access-hollywood-tape">*</a>
This could currently be replicated without ref by just using:
Trump said this and that.\x[donald-trump-access-hollywood-tape][*]
but later on we might add more precise reference fields like the page of a book or date fetched as Wikipedia supports.
Implemented at: github.com/ourbigbook/ourbigbook/issues/137
If true, then the link is called a "topic link" and gets treated specially, pointing to an external OurBigBook Web topic rather than a header defined in the current project.
For example, when rendering a static website, a link such as:
\x[Albert Einstein]{topic}
would produce output similar to:
\a[https://ourbigbook.com/go/topic/albert-einstein][Albert Einstein]
e.g.:
\x[Albert Einstein]{topic}
which renders as:
This allows static website creators to easily link to topics they might not have already written about which others may have covered.
The OurBigBook Web instance linked to can be configured with host.
Those links also work on OurBigBook Web rendering of course, and point to the current Web instance.
If an insane magic link starts with a hash sign (#), then it is converted to a topic link instead of a magic link.
For example:
<#Albert Einstein>
which renders as:
is equivalent to:
\x[Albert Einstein]{topic}
If an insane topic link is made up of a single word then it can be written in the following even more succinct notation, without the need for angle brackets:
I like #dogs
which renders as:
I like dogs
is equivalent to:
I like <#dogs>
Word separation is defined analogously to Insane link parsing rules, i.e.:
  • # can start from anywhere, including the middle of words, e.g.:
    abc#mytopic
    which produces a link immediately preceded by the characters abc.
  • # ends at any insane link termination character, e.g.:
    • Topic is mytopic:
      #mytopic is cool
      which renders as:
      mytopic is cool
    • Topic is mytopic, with the comma:
      #mytopic, is cool
      which renders as:
      mytopic, is cool
      So in this case you would likely want to use the sane syntax <#mytopic>, is cool instead to avoid that.
Unlike local links, it is not possible to automatically determine the exact pluralization of a topic link because:
  • it would require communicating with the OurBigBook Web API, which we could in principle do, but we would rather not have static builds depend on Web instances
  • topics can be written by multiple authors, and there could be both plural and singular versions of each topic ID, which makes it hard to determine which one is "correct"
Therefore, it is up to authors to specifically specify the desired pluralization of their topic links:
  • by default, topic IDs are automatically singularized, e.g.:
    <#Many Dogs>
    renders something like:
    \a[https://ourbigbook.com/go/topic/many-dog][Many Dogs]
  • to prevent this automatic singularization, use \x p argument with {p=1}, e.g.:
    <#Many Dogs>{p=1}
    renders something like:
    \a[https://ourbigbook.com/go/topic/many-dogs][Many Dogs]
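The pluralization behavior can be sketched as follows. This is an illustrative sketch: the naive trailing-"s" singularizer stands in for the real inflection library.

```javascript
// Illustrative sketch of topic link target generation.
function topicHref(title, { p } = {}) {
  let id = title.toLowerCase().replace(/[^a-z0-9]+/g, '-');
  // Default behavior: singularize the last word, unless p=1 is given.
  if (p !== 1) id = id.replace(/s$/, '');
  return 'https://ourbigbook.com/go/topic/' + id;
}

console.log(topicHref('Many Dogs'));
// https://ourbigbook.com/go/topic/many-dog
console.log(topicHref('Many Dogs', { p: 1 }));
// https://ourbigbook.com/go/topic/many-dogs
```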
Pluralizes or singularizes the last word of the target title.
For more details, see: Section 4.2.20.2. "Cross reference title inflection".

4.3. OurBigBook Markup syntax

words: 3k articles: 31

4.3.1. Insane macro shortcut

words: 481 articles: 4
Certain commonly used macros have insane macro shortcuts that do not start with backslash (\).
Originally, Ciro wanted to avoid those, but they just feel too good to avoid.
Every insane syntax does however have an equivalent sane syntax.
The style recommendation is: use the insane version which is shorter, unless you have a specific reason to use the sane version.
Insane in our context does not mean worse. It just means "harder for the computer to understand". But it is more important that humans can understand in the first place! It is fine to make the computer work a bit more for us when we are able to.
The insane code and math shortcuts work very analogously and are therefore described together in this section.
The insane inline code syntax:
a `b c` d
which renders as:
a b c d
and is equivalent to the sane:
a \c[[b c]] d
The insane block code:
a

``
b c
``

d
which renders as:
a
b c
d
and is equivalent to the sane:
a

\C[[
b c
]]

d
Insane arguments always work by abbreviating:
  • the macro name
  • one or more of its positional arguments, which are fixed as either literal or non-literal for a given insane construct
This means that you can add further arguments as usual.
For example, an insane inline code element with an id can be written as:
a `b c`{id=ef} g
because that is the same as:
a \c[[b c]]{id=ef} g
which renders as:
a
b c
g
So we see that the b c argument is the very first argument of \c.
Extra arguments must come after the insane opening, e.g. the following does not work:
a {id=ef}`b c` g
This restriction keeps things easy to parse for humans and machines alike.
Literal backticks and dollar signs can be produced with a backslash escape as in:
a \` \$ b
which renders as:
a ` $ b
It is not possible to escape backticks (`) inside an insane inline code, or dollar signs ($) in insane math.
The design reason for that is because multiple backticks produce block code.
The upside is that then you don't have to escape anything else, e.g. backslashes (\) are rendered literally.
The only way to do it is to use the sane syntax instead:
a \c[[b ` c]] d

a \m[[\sqrt{\$4}]] d
which renders as:
a b ` c d
a d
Within block code and math, you can just add more separators:
```
code with two backticks
``
nice
```
which renders as:
code with two backticks
``
nice
OurBigBook Markup macro identifiers can consist of the following letters:
  • a-z lowercase
  • A-Z uppercase
  • 0-9
Since underscores (_) and hyphens (-) are not allowed, camel case macro names are recommended, e.g. for \OurBigBookExample we use the name:
OurBigBookExample

4.3.3. Macro argument

words: 2k articles: 24
Every argument in OurBigBook is either positional or named.
For example, in a header definition with an ID:
= My asdf
{id=asdf qwer}
{scope}
which is equivalent to the sane version:
\H[1][My asdf]
{id=asdf qwer}
{scope}
we have:
  • two positional arguments: [1] and [My asdf]. Those are surrounded by square brackets [] and have no name
  • two named arguments: {id=asdf qwer} and {scope}.
    The first one has name id, followed by the separator =, followed by the value asdf qwer.
    The value after the separator = is optional. If not given, it is equivalent to an empty value, e.g.:
    {id=}
    is the same as:
    {id}
You can determine if a macro is positional or named by using --help-macros. Its output contains something like:
  "h": {
    "name": "h",
    "positional_args": [
      {
        "name": "level"
      },
      {
        "name": "content"
      }
    ],
    "named_args": {
      "id": {
        "name": "id"
      },
      "scope": {
        "name": "scope"
      }
    },
and so we see that level and the content argument are positional arguments, and id and scope are named arguments.
Generally, positional arguments are few (otherwise it would be hard to know which is which), and are almost always used for a given element so that they save us from typing the name too many times.
The order of positional arguments must of course be fixed, but named arguments can go anywhere. We can even mix positional and named arguments however we want, although this is not advised for clarity.
The following are therefore all equivalent:
\H[1][My asdf]{id=asdf qwer}{scope}
\H[1][My asdf]{scope}{id=asdf qwer}
\H{id=asdf qwer}{scope}[1][My asdf]
\H{scope}[1]{id=asdf qwer}[My asdf]
Just like named arguments, most positional arguments are optional.
4.3.3.1.1. Positional argument
words: 121 articles: 2
See: Section 4.3.3.1. "Positional vs named arguments ([...] vs {key=...})".
Most positional arguments will default to an empty string if not given.
However, some positional arguments can have special effects if not given.
For example, an anchor with the first positional argument present (the URL), but not the second positional argument (the link text) as in:
\a[http://example.com]
which renders as:
has the special effect of generating automatic links as in:
\a[http://example.com][http://example.com]
This can be contrasted with named arguments, for which there is always a default value, notably for boolean arguments.
See also: Section 4.2.1. "Link (\a)".
Some positional arguments are required, and if not given OurBigBook reports an error and does not render the node.
This is for example the level of a header.
These arguments are marked with the mandatory: true property in the --help-macros output.
See: Section 4.3.3.1. "Positional vs named arguments ([...] vs {key=...})".
Named arguments marked in --help-macros as boolean: true must either:
  • take no value and no = sign, in which case the value is implicitly set to 1
  • take value exactly 0 or 1
  • not be given, in which case a custom per-macro default is used. That value is the default from --help-macros, or 0 if such default is not given
For example, the \x full argument of cross references is correctly written as:
\x[boolean-argument]{full}
which renders as:
without the = sign, or equivalently:
\x[boolean-argument]{full=1}
which renders as:
The full=0 version is useful in the case of reference targets that unlike headers expand the title on the cross reference by default, e.g. images:
\x[boolean-argument]{full=0}
which renders as:
The name "boolean argument" is given by analogy to the "boolean attribute" concept in HTML5.
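The resolution rules above can be sketched as follows. This is an illustrative sketch, not the library's actual implementation; the defaultValue parameter models the per-macro default reported by --help-macros.

```javascript
// Sketch of boolean named-argument resolution.
function boolArg(given, defaultValue = 0) {
  if (given === undefined) return defaultValue; // argument not given
  if (given === '') return 1; // e.g. {full} without "=": implicitly 1
  if (given === '0' || given === '1') return Number(given);
  throw new Error('boolean argument must be 0 or 1, got: ' + given);
}

console.log(boolArg(undefined)); // 0 (fallback default)
console.log(boolArg(''));        // 1
console.log(boolArg('0'));       // 0
console.log(boolArg('1'));       // 1
```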
4.3.3.3. Common argument
words: 462 articles: 4
Common arguments are argument names that are present in all macros.
Explicitly sets the ID of a macro.
In OurBigBook Markup, every single macro has an ID, which can be either:
  • explicit: extracted from some input given by the user, either the id argument or the title argument. Explicit IDs can be referenced in Internal cross references and must be unique
  • implicit: automatically generated numerical ID. Implicit IDs cannot be referenced in Internal cross references and don't need to be unique. Their primary application is generating on hover links next to everything you hover, e.g. arbitrary paragraphs.
The most common way to assign an ID is implicitly with automatic ID from title conversion for macros that have a title argument.
The id argument allows to either override the automatic ID from title, or provide an explicit ID for elements that don't have a title argument.
4.3.3.3.2. disambiguate argument
words: 340 articles: 2
Sometimes the short version of a name is ambiguous, and you need to add some extra text to make both its title and ID unique.
For example, the word "Python" could either refer to:
The disambiguate named argument helps you deal more neatly with such problems.
Have a look at this example:
My favorite snakes are \x[python-genus]{p}!

My favorite programming language is \x[python-programming-language]!

\x[python-genus]{full}

\x[python-programming-language]{full}

= Python
{disambiguate=genus}
{parent=disambiguate-argument}

= Python
{c}
{disambiguate=programming language}
{parent=disambiguate-argument}
{title2=.py}
{wiki}
which renders as:
My favorite snakes are pythons!
My favorite programming language is Python!
from which we observe how disambiguate:
  • gets added to the ID after conversion following the same rules as automatic ID from title
  • shows up on the header between parenthesis, much like Wikipedia, as well as in full cross references
  • does not show up on non-full references. This makes it much more likely that you will be able to reuse the title automatically on a cross reference without the content argument: we wouldn't want to say "My favorite programming language is Python (programming language)" all the time, would we?
  • gets added to the default \H wiki argument inside parenthesis, following Wikipedia convention, therefore increasing the likelihood that you will be able to go with the default Wikipedia value
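How disambiguate extends the automatic ID can be sketched as follows. toId here is a simplified, hypothetical stand-in for the real automatic ID from title conversion.

```javascript
// Sketch: {disambiguate} goes through the same title-to-ID
// conversion as the title itself and is appended at the end.
function toId(s) {
  return s.toLowerCase().replace(/[^a-z0-9]+/g, '-');
}
function headerId(title, disambiguate) {
  return disambiguate ? toId(title) + '-' + toId(disambiguate) : toId(title);
}

console.log(headerId('Python', 'genus'));
// python-genus
console.log(headerId('Python', 'programming language'));
// python-programming-language
```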
Besides disambiguating headers, the disambiguate argument has a second related application: disambiguating IDs of images. For example:
\x[image-the-title-of-my-disambiguate-image]{full=0}

\x[image-the-title-of-my-disambiguate-image-2]{full=0}

\x[image-the-title-of-my-disambiguate-image]{full}

\x[image-the-title-of-my-disambiguate-image-2]{full}

\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=The title of my disambiguate image}

\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=The title of my disambiguate image}
{disambiguate=2}
which renders as:
Figure 23. The title of my disambiguate image.
Figure 24. The title of my disambiguate image.
Note that unlike for headers, disambiguate does not appear on the title of images at all. It serves only to create a unique ID that can later be referred to. Headers are actually the only case where disambiguate shows up in the visible rendered output. We intend to make this application obsolete however with:
This use case is even more useful when title-from-src is enabled by default for the media-providers entry, so you don't have to repeat titles over and over.
The JavaScript interface sees arguments as follows:
function macro_name(args)
where args is a dict such that:
  • optional arguments have the key/value pairs explicitly given on the call
  • mandatory arguments have a key documented by the API, and the value on the call.
    For example, the link API names its arguments href and text.
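A hypothetical macro implementation following this interface might look as follows. The argument names "href" and "text" come from the link API example above; the HTML produced here is illustrative only, not the library's actual output.

```javascript
// Hypothetical macro function following the described interface.
function a(args) {
  const href = args.href;         // mandatory: key documented by the API
  const text = args.text ?? href; // optional: defaults to automatic link
  return `<a href="${href}">${text}</a>`;
}

console.log(a({ href: 'http://example.com' }));
// <a href="http://example.com">http://example.com</a>
console.log(a({ href: 'http://example.com', text: 'Example' }));
// <a href="http://example.com">Example</a>
```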
Arguments that are opened with more than one square brackets [ or curly braces { are literal arguments.
In literal arguments, OurBigBook is not parsed, and the entire argument is considered as text until a corresponding close with the same number of characters.
Therefore, you cannot have nested content, but it makes it extremely convenient to write code blocks or mathematics.
For example, a multiline code block with double open and double close square brackets inside can be enclosed in triple square brackets:
A literal argument looks like this in OurBigBook:

\C[[
\C[
A multiline

code block.
]
]]

And another paragraph.
which renders as:
A literal argument looks like this in OurBigBook:
\C[
A multiline

code block.
]
And another paragraph.
The same works for inline code:
The program \c[[puts("]");]] is very complex.
which renders as:
The program puts("]"); is very complex.
Within literal blocks, the only things that can be escaped with backslashes are:
  • leading open square bracket [
  • trailing close square bracket ]
The rule is that:
  • if the first character of a literal argument is a sequence of backslashes (\), and it is followed by another argument open character (e.g. [), remove the first \ and treat the other characters as regular text
  • if the last character of a literal argument is a \, ignore it and treat the following closing character (e.g. ]) as regular text
See the following open input/output pairs:
\c[[\ b]]
<code>\ b</code>

\c[[\a b]]
<code>\a b</code>

\c[[\[ b]]
<code>[ b</code>

\c[[\\[ b]]
<code>\[ b</code>

\c[[\\\[ b]]
<code>\\[ b</code>
and close examples:
\c[[a \]]
<code>a \</code>

\c[[a \]]]
<code>a ]</code>

\c[[a \\]]]
<code>a \]</code>
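The leading-backslash rule can be sketched as follows. This is an illustrative sketch only; the trailing-backslash rule affects delimiter matching during parsing and is not modelled here.

```javascript
// Sketch of the leading-backslash rule for literal arguments: when the
// argument starts with backslashes followed by an argument-open
// character, exactly one backslash is removed; otherwise the text is
// kept verbatim.
function literalLeading(body) {
  return /^\\+[\[{]/.test(body) ? body.slice(1) : body;
}

console.log(literalLeading('\\ b'));    // \ b   (no open char: kept)
console.log(literalLeading('\\a b'));   // \a b
console.log(literalLeading('\\[ b'));   // [ b
console.log(literalLeading('\\\\[ b')); // \[ b
```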
If the very first or very last character of an argument is a newline, then that character is ignored if it would be part of a regular plaintext node.
For example:
\C[[
a

b
]]
generates something like:
<pre><code>a

b</code></pre>
instead of:
<pre><code>
a

b
</code></pre>
This is extremely convenient to improve the readability of code blocks and similar constructs.
The newline is however considered if it would be part of some insane macro shortcut. For example, we can start an insane list inside a quotation as in:
\Q[
* a
* b
]
which renders as:
  • a
  • b
where the insane list requires a leading newline \n* to work. That newline is not ignored, even though it comes immediately after the \Q[ opening.
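The trimming rule can be sketched as follows. This is an illustrative sketch; the insane-shortcut exception just described (e.g. a list right after \Q[) is not modelled here.

```javascript
// Sketch of the newline trimming rule: one leading and one trailing
// newline of an argument are dropped when they would belong to a
// plaintext node.
function trimArgumentNewlines(text) {
  return text.replace(/^\n/, '').replace(/\n$/, '');
}

console.log(JSON.stringify(trimArgumentNewlines('\na\n\nb\n')));
// "a\n\nb"
```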
The macro name and the first argument, and any two consecutive arguments, can be optionally separated by exactly one newline character, e.g.:
\H
[2]
{scope}
[Design goals]
is equivalent to:
\H[2]{scope}[Design goals]
which is also equivalent to:
\H[2]{scope}
[Design goals]
This greatly improves the readability of long argument lists by allowing one argument per line.
There is one exception to this however: inside an insane header, any newline is interpreted as the end of the insane header. This is why the following works as expected:
== My header 2 `some code`
{id=asdf}
and the id gets assigned to the header rather than the trailing code element.
If the document ends in one newline, it is ignored.
If it is two or more, then that generates an error.
Every character that cannot be part of a macro identifier can be escaped with a backslash \. If you try to escape a macro identifier character, it of course treats the thing as a macro instead and fails, e.g. \a would try to use a macro called a, not escape the character a.
For some characters, escaping or not does not make any difference because they don't have any meaning to OurBigBook Markup, e.g. currently % is always the exact same as \%.
But in non-literal macro arguments, you have to use a backslash to escape the following if you want them to not have any magical meaning:
Furthermore, only:
  • at the start of the document
  • after a newline
  • at the start of a new argument
must you also escape the following macros with insane shortcuts:
The escape rules for literal arguments are described at: Section 4.3.3.5. "Literal arguments ([[...]] and {{key=...}})".
This is good for short arguments of regular text, but for longer blocks like code blocks or mathematics, you may want to use literal arguments.
4.3.3.10. Macro argument property
words: 218 articles: 2
Each macro argument can have certain properties associated to it.
These properties have programmatic effects, and allow users and developers to more easily understand and create new macro arguments.
In HTML, certain elements such as <ul> cannot have any text nodes in them, and any whitespace is ignored, see stackoverflow.com/questions/2161337/can-we-use-any-other-tag-inside-ul-along-with-li/60885802#60885802.
A similar concept applies to OurBigBook, e.g.:
\Ul[
\L[aa]
\L[bb]
]
does not parse as:
\Ul[\L[aa]<NEWLINE>\L[bb]<NEWLINE>]
but rather as:
\Ul[\L[aa]\L[bb]]
because the content argument of ul is marked with remove_whitespace_children and automatically removes any whitespace children (such as a newline) as a result.
This also applies to consecutive sequences of auto_parent macro property macros, e.g.:
\L[aa]
\L[bb]
also does not include the newline between the list items.
The definition of whitespace is the same as the ASCII whitespace definition of HTML5: space, \t, \n, \f and \r.
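The removal can be sketched as follows. This is an illustrative sketch over a simplified child-node representation, not the library's actual AST types.

```javascript
// Sketch of remove_whitespace_children: plaintext children consisting
// only of ASCII whitespace (space, \t, \n, \f, \r) are dropped.
function removeWhitespaceChildren(children) {
  return children.filter(
    (c) => !(c.type === 'plaintext' && /^[ \t\n\f\r]*$/.test(c.text))
  );
}

const children = [
  { type: 'L', text: 'aa' },
  { type: 'plaintext', text: '\n' },
  { type: 'L', text: 'bb' },
];
console.log(removeWhitespaceChildren(children).map((c) => c.type));
// [ 'L', 'L' ]
```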
By default, arguments can be given only once.
However, arguments with the multiple property set to true can be given multiple times, and each time the argument is given, the new value is appended to a list containing all the values.
An example is the \H child argument.
Internally, multiple is implemented by creating a new level in the abstract syntax tree, and storing each argument separately under newly generated dummy nodes as in:
AstNode: H
  AstArgument: child
    AstNode: Comment
      AstArgument: content
        AstNode: plaintext
        AstNode: x
    AstNode: Comment
      AstArgument: content
        AstNode: plaintext
        AstNode: x
This section documents ways to classify macro arguments that are analogous to macro argument properties, but which don't yet have clear and uniform programmatic effects and so are a bit more hand wavy for now.
The content argument of macros contains the "main content" of the macro, i.e. the textual content that will show most prominently once the macro is rendered. It is usually, but not always, the first positional argument of macros. We should probably make it into an official macro argument property at some point.
In most cases, it is quite obvious which argument is the content argument, e.g.:
Some macros however don't have a content argument, especially when they don't show any textual content as their primary rendered output, e.g.:
  • \Image macro: this macro has title but not content, e.g. as in: \Image[flower.jpg]{title=}, since the primary content is the image rather than any specific text
Philosophically, the content argument of a macro is analogous to the innerHTML of an HTML tag, as opposed to attributes such as href= and so on. The difference is that in OurBigBook Markup, every macro argument can contain child elements, while in HTML only the innerHTML, but not the attributes, can.
The title argument is an argument that gets used in automatic ID from title calculation of macro IDs.
The title argument currently appears as both positional arguments and named arguments.
Examples:
The description argument is similar to the title argument in that it adds information about some block such as an image or code block. The difference from the title is that it does not count toward automatic ID from title calculations.

4.4. OurBigBook Markup concepts

words: 452 articles: 8
These are shared concepts that are used across other sections.
Some sequences of macros such as l from lists and tr from tables automatically generate implicit parents, e.g.:
\Ul[
\L[aa]
\L[bb]
]
parses exactly like:
\L[aa]
\L[bb]
The children are always added as arguments of the content argument of the implicit parent.
If present, the auto_parent macro property determines which auto-parent gets added to those macros.
Every OurBigBook macro is either block or inline:
  • a block macro is one that takes up the entire line when rendered
    All block macros start with a capital letter, e.g. \H for headers.
  • and an inline macro is one that goes inside of a line.
    Every inline macro starts with a lowercase letter e.g. \a for links.
Some macros have both a block and an inline version, and like any other macro, those are differentiated by capitalization:
Certain common URL protocols are treated as "known" by OurBigBook, and when found they have special effects in some parts of the conversion.
The currently known protocols are:
  • http://
  • https://
Effects of known protocols include:
Some parts of OurBigBook use "JavaScript case conversion".
This means that the conversion is done as if by the toLowerCase/toUpperCase functions.
The most important fact about those functions is that they do convert non-ASCII Unicode capitalization, e.g. between É and é:
These conversions are also specified in the Unicode standard.
developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions
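For instance, a quick check of the É/é conversion mentioned above:

```javascript
// JavaScript case conversion handles non-ASCII Unicode characters:
console.log('É'.toLowerCase()); // é
console.log('é'.toUpperCase()); // É
console.log('Émilie'.toLowerCase()); // émilie
```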
If the project toplevel directory of an OurBigBook project is also a git repository, and if git is installed, then the OurBigBook project is said to be a "Git tracked project".

4.4.7. Element ID

words: 182 articles: 1
In general, each usage of a macro produces an element, and every element has an ID.
IDs must be unique, and they are used as the target of internal cross references.
E.g. due to Section 4.2.6.4.9.1.1. "Automatic ID from title", the elements:
= Animal

== Big dog

I like <big dogs>.
would have IDs respectively:
  • animal
  • big-dog
Such IDs are almost always rendered as HTML IDs as something like:
<h1 id="animal">
<h2 id="big-dog">
and can therefore be linked to in a page with the corresponding fragment:
animal.html#big-dog
IDs that start with an underscore _ are reserved for OurBigBook usage, and will give an error if you try to use them, in order to prevent ID conflicts.
For example:
  • the ID of the table of contents is always fixed to _toc
  • elements without an explicit ID may receive automatically generated IDs of type _1, _2 and so on
If you use a reserved ID, you will get an error message of type:
error: tmp.bigb:3:1: IDs that start with "_" are reserved: "_toc"
OurBigBook CLI is the executable program called ourbigbook which comes when you install npm install ourbigbook. It is the main command line utility of the OurBigBook Project.
Its functionality will also be exposed on GUI editor support such as Visual Studio Code to make things nicer for non-technical users.
The main functionalities of the executable are to:
Or if you are a programmer: OurBigBook CLI is a Static Wiki generator that can be invoked from the command line with the ourbigbook executable.
OurBigBook CLI is how cirosantilli.com is published.
OurBigBook Web takes as input the exact same format of OurBigBook Markup files used by OurBigBook CLI. TODO support/improve import/export to/from OurBigBook Web, see also: -W, --web.
The OurBigBook CLI calls the OurBigBook Library to convert each input file.
Convert a .bigb file to HTML and output the HTML to a file with the same basename without extension, e.g.:
ourbigbook hello.bigb
firefox out/html/hello.html
Files named README.bigb are automatically converted to index.html so that they will show on both GitHub READMEs and at the website's base address:
ourbigbook README.bigb
firefox out/html/index.html
Convert all .bigb files in a directory to HTML files, e.g. somefile.bigb to out/html/somefile.html:
ourbigbook .
The HTML output files are placed under out/html/, mirroring the location of each corresponding .bigb file.
The output file can be selected explicitly with: --outfile <outfile>.
Output to stdout instead of saving it to a file:
ourbigbook --stdout README.bigb
In order to resolve cross file references, this actually does two passes:
  • first an ID extraction pass, which parses all inputs and dumps their IDs to the ID database
  • then a second render pass, which uses the IDs in the ID database
Convert a .bigb file from stdin to HTML and output the contents of <body> to stdout:
printf 'ab\ncd\n' | ourbigbook --body-only
Stdin conversion is a bit different from conversion from a file in that it ignores the ourbigbook.json and any other setting files present in the current directory or its ancestors. Also, it does not produce any changes to the ID database. In other words, a conversion from stdin is always treated as if it were outside of any project, and therefore should always produce the same results regardless of the current working directory.

5.2. OurBigBook CLI quick start

words: 706 articles: 4
Learn the syntax basics in 5 minutes: docs.ourbigbook.com/editor.
Play with an OurBigBook template locally:
git clone https://github.com/ourbigbook/template
cd template
npm install
npx ourbigbook .
firefox out/html/index.html
That template can be seen rendered live at: cirosantilli.com/ourbigbook-generate-multifile/. Other templates are documented at: --generate.
To publish to GitHub Pages on your repository you can just fork the repository github.com/ourbigbook/template to your own github.com/johndoe/template and then:
git remote set-url origin git@github.com:johndoe/template.git
npx ourbigbook --publish
and it should now be visible at: johndoe.github.io/template
Then, every time you make a change you can publish the new version with:
git add .
git commit --message 'hacked stuff'
ourbigbook --publish .
or equivalently with the -P, --publish-commit <commit-message> shortcut:
ourbigbook --publish-commit 'hacked stuff'
If you want to publish to your root page johndoe.github.io instead of johndoe.github.io/template you need to rename the master branch to dev as mentioned at publish to GitHub pages root page:
git remote set-url origin git@github.com:johndoe/johndoe.github.io.git

# Rename master to dev, and delete the old master.
git checkout -b dev
git push origin dev:dev
git branch -D master
git push --delete origin master

npx ourbigbook --publish
The following files of the template control the global style of the output, and you are free to edit them:
  • ourbigbook.liquid.html: global HTML template in Liquid format. Available variables are documented at Section 5.5.25. "--template"
  • main.scss: Sass file that gets converted to raw CSS main.css by npx ourbigbook ..
    Sass is just much more convenient to write than raw CSS.
    That file gets included into the global HTML template inside ourbigbook.liquid.html at:
    <link rel="stylesheet" href="{{ root_relpath }}main.css">
When you run:
npx ourbigbook .
it converts all files in the current directory separately, e.g.:
  • README.bigb to out/html/index.html, since README is a magic name that we want to show on the root URL
  • not-readme.bigb to out/html/not-readme.html, as this one is a regular name unlike README
  • main.scss to main.css
If one of the input files starts getting too large, usually the toplevel README.bigb in which you dump everything by default like Ciro does, you can speed up development by just compiling files individually with either:
npx ourbigbook README.bigb
npx ourbigbook not-readme.bigb
Note however that when those individual files have a cross file reference to something defined in not-readme.bigb, e.g. via \x[h2-in-not-the-readme], then you must first have done a pass with:
npx ourbigbook .
to parse all files and extract all necessary IDs to the ID database. That would be optimized slightly with the --no-render command line option:
npx ourbigbook --no-render .
to only extract the IDs but not render, which speeds things up considerably.
When dealing with large files, you might also be interested in the following amazing options:
To produce a single standalone output file that contains everything the viewer needs to correctly see the page do:
npx ourbigbook --embed-resources --embed-includes README.bigb
You can now just give the generated out/html/index.html to any reader and they should be able to view it offline without installing anything. The flags are:
  • --embed-includes: without this, \Include[not-readme] shows as a link to the file out/html/not-readme.html which comes from not-readme.bigb. With the flag, the not-readme.bigb output gets embedded into the output out/html/index.html directly
  • --embed-resources: by default, we link to CSS and JavaScript that lives inside node_modules. With this flag, that CSS and JavaScript is copied inline into the document instead. One day we will try to handle images that way as well
Install the NPM package globally and use it from the command line for a quick conversion:
npm install -g ourbigbook
printf 'ab\ncd\n' | ourbigbook --body-only
or to a file:
printf 'ab\ncd\n' | ourbigbook > tmp.html
You almost never want to do this except when developing OurBigBook, as it won't be clear what version of ourbigbook the document should be compiled with. Just be a good infant and use OurBigBook with the template that contains a package.json via npx, OK?
Furthermore, the default install of Chromium on Ubuntu 21.04 uses Snap and blocks access to dotfiles. For example, in a sane NVM install, our global CSS would live under /home/ciro/.nvm/versions/node/v14.17.0/lib/node_modules/ourbigbook/_obb/ourbigbook.css, which gets blocked because of the .nvm part:
One workaround is to use --embed-resources, but this of course generates larger outputs.
To run master globally from source for development see: Section 12.1. "Run OurBigBook master". This one actually works despite the dotfile thing since your development path is normally outside of dotfiles.
Try out the JavaScript API with lib_hello.js:
npm install ourbigbook
./lib_hello.js

5.3. Publish your content

words: 260 articles: 1
There are two ways to publish your OurBigBook Project content:
A fundamental design choice of the OurBigBook Project is that, except for bugs, a single OurBigBook Markup source tree can be published in both of those ways without any changes.
The trade-offs between the two options are highlighted at: OurBigBook Web vs static website publishing.
Video 11. Edit locally and publish demo. Source.

5.4. Index file

words: 613 articles: 4
The following basenames are considered "index files":
  • README.bigb
  • index.bigb
Those basenames have the following magic properties:
  • the default output file name for an index file in HTML output is either:
    • index.html when in the project toplevel directory. E.g. README.bigb renders to index.html. Note that GitHub and many other static website hosts then automatically hide the index.html part from the URL, so that your README.bigb hosted at http://example.com will be accessible simply under http://example.com and not http://example.com/index.html
    • the name of the subdirectory in which it is located when not in the project toplevel directory. E.g. mysubdir/index.bigb outputs to mysubdir.html
      Previously, we had placed the output in mysubdir/index.html, but this is not as nice, as it makes GitHub Pages produce URLs with a trailing slash as in mysubdir/, which is ugly, see also: stackoverflow.com/questions/5948659/when-should-i-use-a-trailing-slash-in-my-url
  • the default toplevel header ID of an index file is derived from the parent directory basename rather than from the source file basename
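The naming rules above can be sketched as a small shell function. This is a hypothetical helper written for illustration, not part of the actual CLI:

```shell
# Sketch of the index-file output naming rules described above.
# output_name is a hypothetical illustration, not a real ourbigbook command.
output_name() {
  case "$1" in
    README.bigb|index.bigb) echo index.html ;;                  # toplevel index file
    */README.bigb|*/index.bigb) echo "$(dirname "$1").html" ;;  # index file in a subdirectory
    *.bigb) echo "${1%.bigb}.html" ;;                           # regular file
  esac
}

output_name README.bigb          # prints index.html
output_name mysubdir/index.bigb  # prints mysubdir.html
output_name not-readme.bigb      # prints not-readme.html
```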

5.4.1. Project toplevel directory

words: 461 articles: 3
This directory is determined by first checking for the presence of a ourbigbook.json file.
If a ourbigbook.json is found, then the project toplevel directory is the directory that contains that file.
For example, consider the following file structure relative to the current working directory:
path/to/notindex.bigb
In this case:
  • if there is no ourbigbook.json file:
    • if we run ourbigbook .: the toplevel directory is the current directory ., and so notindex.bigb has ID path/to/notindex
    • if we run ourbigbook path: same
    • if we run ourbigbook path/to: same
    • if we run ourbigbook path/to/notindex.bigb: same
  • if there is a path/ourbigbook.json file:
    • if we run ourbigbook .: the toplevel directory is the current directory . because the ourbigbook.json is below the entry point and is not seen, and so notindex.bigb has ID path/to/notindex
    • if we run ourbigbook path: the toplevel directory is the directory with the ourbigbook.json, path, and so notindex.bigb has ID to/notindex
    • if we run ourbigbook path/to: same
    • if we run ourbigbook path/to/notindex.bigb: same
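The lookup described above can be pictured as walking up the directory tree from the conversion entry point until a ourbigbook.json is found, falling back to the current working directory otherwise. This is a sketch of the idea only, not the real implementation:

```shell
# Sketch: find the project toplevel directory by walking up from the
# entry point until a directory containing ourbigbook.json is found.
# Falls back to the current working directory, mimicking the
# "no ourbigbook.json" case described above.
find_toplevel() {
  dir=$(cd "$1" && pwd)
  while [ "$dir" != / ]; do
    if [ -f "$dir/ourbigbook.json" ]; then
      echo "$dir"
      return
    fi
    dir=$(dirname "$dir")
  done
  pwd
}

mkdir -p /tmp/obb-demo/path/to
touch /tmp/obb-demo/path/ourbigbook.json
find_toplevel /tmp/obb-demo/path/to  # prints /tmp/obb-demo/path
```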
5.4.1.1. The toplevel index file
words: 249 articles: 2
This is the index file present in the project toplevel directory.
Being the toplevel index file has the following implications compared to other index files:
The "index article" is the first article of the toplevel index file. E.g. in:
README.bigb
= John Smith's Homepage

== I like dogs
then "John Smith's Homepage" is the index article of the project, but "I like dogs" is not.
When the file or directory being converted has an ancestor directory with a ourbigbook.json file, then your current working directory does not have any effect on OurBigBook output. For example if we have:
/project/ourbigbook.json
/project/README.bigb
/project/subdir/README.bigb
then all of the following conversions produce the same output:
  • directory conversion:
    • cd /project && ourbigbook .
    • cd / && ourbigbook project
    • cd project/subdir && ourbigbook ..
  • file conversion:
    • cd /project && ourbigbook README.bigb
    • cd / && ourbigbook project/README.bigb
    • cd project/subdir && ourbigbook ../README.bigb
When there isn't a ourbigbook.json, everything happens as though there were an empty ourbigbook.json file in the current working directory. So for example:
  • outputs that would be placed relative to inputs are still placed in that place, e.g. README.bigb -> index.html always stay together
  • outputs that would be placed next to the ourbigbook.json are put in the current working directory, e.g. the out directory
Internally, the general philosophy is that the JavaScript API in index.js works exclusively with paths relative to the project toplevel directory. It is then up to callers such as ourbigbook to ensure that filesystem specifics handle the relative paths correctly.

5.5. OurBigBook CLI options

words: 6k articles: 61
Check the database for consistency, e.g. duplicated IDs. Don't do anything else, including ID extraction, which must have been done previously.
It was initially added for use in Parallel builds.

5.5.2. --china

words: 43
This is the most important option of the software.
It produces a copy of the HTML of cirosantilli.com/china-dictatorship to stdout.
The data is stored inside an NPM package, making it hard to censor that information, see also: cirosantilli.com/china-dictatorship#mirrors
Usage:
ourbigbook --china > china.html
firefox china.html

5.5.3. --dry-run

words: 95 articles: 1
The --dry-run option is a good way to debug the --publish option, as it builds the publish output files without doing any git commands that would be annoying to revert. So after doing:
ourbigbook --dry-run --publish .
you can just go and inspect the generated HTML to see what would get pushed at:
cd out/publish/out/publish/
see also: the out directory.
Inspiration: github.com/cirosantilli/linux-kernel-module-cheat/tree/6d0a900f4c3c15e65d850f9d29d63315a6f976bf#dry-run-to-get-commands-for-your-project
Similar to --dry-run, but it runs all git commands except for git push, which gives a clearer idea of what --publish would actually do including the git operations, but without publishing anything:
./ourbigbook --dry-run --publish .
Makes includes render the included content in the same output file as the include is located, instead of the default behaviour of creating links.
For example given:
README.bigb
= Index

\Include[notindex]
notindex.bigb
= Notindex

A paragraph in notindex.

== Notindex 2
then for conversion with:
ourbigbook --embed-includes README.bigb
then the output index.html contains an output equivalent to if your input file were:
= Index

== Notindex

A paragraph in notindex.

=== Notindex 2
Note that a prior ID extraction pass is not required, --embed-includes just makes \Include read files as they are found in the source.
In addition to this:
  • cross file references are disabled, and the cross file ID database does not get updated.
    It should be possible to work around this, but we are starting with the simplest implementation that forbids it.
    The problem those cause is that the IDs of included headers show as duplicate IDs of those in the ID database.
    This should be OK to start with because the more common use case with --html-single-page is that of including all headers in a single document. TODO: this option is gone.
Otherwise, include only adds the headers of the other file to the table of contents of the current one, but not the body of the other file. The ToC entries then point to the headers of the included external files.
You may want to use this option together with --embed-resources to produce fully self-contained individual HTML files for your project.
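The header demotion that --embed-includes performs on included files can be imitated with a one-liner. This is only a rough textual illustration; the real conversion works on the parsed AST, not on raw text:

```shell
# Rough illustration: when notindex.bigb is embedded, its headers are
# demoted one level, as if each were prefixed with an extra '='.
# This sed call only imitates that effect textually.
printf '= Notindex\n\nA paragraph in notindex.\n\n== Notindex 2\n' > notindex.bigb
sed 's/^=/==/' notindex.bigb
```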
Embed as many external resources such as images and CSS as possible into the HTML output files, rather than linking to external resources.
For example, when converting a simple document to HTML:
index.bigb
= Index

My paragraph.
with:
ourbigbook index.bigb
the output contains references to where OurBigBook is installed in our local filesystem:
<style>
@import "/home/ciro/bak/git/ourbigbook/_obb/ourbigbook.css";
</style>
<script src="/home/ciro/bak/git/ourbigbook/_obb/ourbigbook_runtime.js"></script>
The advantage of this is that we don't have to duplicate this for every single file. But if you are giving this file to someone else, they would likely not have those files at those exact locations, which would break the HTML page.
With --embed-resources, the output contains instead something like:
<style>/*! normalize.css v8.0.1 | MIT License | github.com/necolas/normalize.css */html{ [[ ... A LOT MORE CSS ... ]]</style>
<script>/*! For license information please see ourbigbook_runtime.js.LICENSE.txt */ !function(t,e){"object"==typeof exports&&"object"==typeof module?module.exports=e() [[ ... A LOT MORE JAVASCRIPT ... ]]</script>
This way, all the required CSS and JavaScript will be present in the HTML file itself, and so readers will be able to view the file correctly without needing to install any missing dependencies.
The use case for this option is to produce a single HTML file for an entire build that is fully self contained, and can therefore be given to consumers and viewed offline, much like a PDF.
Examples of embeddings done:
Examples of embedding that could be implemented in the future:
  • images are downloaded if needed and embedded as data: URLs.
    Doing this however has a downside: it would slow the page loading down. The root problem is that HTML was not designed to contain assets, and notably it doesn't have byte position indices that can tell it to skip blobs while parsing, and how to refer to them later on when they show up on the screen. This is kind of why EPUB exists: github.com/ourbigbook/ourbigbook/issues/158
    Images that are managed by the project itself and already locally present, such as those inside the project or coming from media-providers, usually don't require downloading.
    For images linked directly from the web, we maintain a local download cache, and skip downloads if the image is already in the cache.
    To re-download due to image updates, use either:
    • --asset-cache-update: download all images such that the local disk timestamp is older than the HTTP modification date with If-Modified-Since
    • --asset-cache-update-force: forcefully redownload all assets
Keep in mind that certain things can never be embedded, e.g.:
  • YouTube videos, since YouTube does not offer any download API
Always render all selected files, irrespective of whether they are known to be outdated.
OurBigBook stores the timestamp of the last successful ID extraction step for each file.
For ID extraction, we always skip the extraction if the filesystem timestamp of a source file is older than the last successful extraction.
For render:
  • we mark output files as outdated when the corresponding source file is parsed
  • we also skip rendering non-outdated files by default when you invoke ourbigbook on a directory, e.g. ourbigbook ., as this greatly speeds up the interactive error fixing turnaround time
  • we always re-render fully when you specify a single file, e.g. ourbigbook path/to/README.bigb
However, note that skipping renders, unlike for ID extraction, can lead to some outdated pages.
This option disables the timestamp skip for rendering, ensuring that you get a fully clean, updated render.
E.g. consider if you had two files:
file1.bigb
= File 1

== File 1 1
file2.bigb
= File 2

== File 2 1

\x[file-1-1]
We then do the initial conversion:
ourbigbook .
we see output like:
extract_ids file1.bigb
extract_ids file1.bigb finished in 45.61287499964237 ms
extract_ids file2.bigb
extract_ids file2.bigb finished in 15.163879998028278 ms
render file1.bigb
render file1.bigb finished in 23.21016100049019 ms
render file2.bigb
render file2.bigb finished in 25.92908499762416 ms
indicating full conversion without skips.
But then if we just modify file1.bigb as:
= File 1

== File 1 1 hacked
{id=file-1-1}
the following conversion with ourbigbook . would look like:
extract_ids file1.bigb
extract_ids file1.bigb finished in 45.61287499964237 ms
extract_ids file2.bigb
extract_ids file2.bigb skipped by timestamp
render file1.bigb
render file1.bigb finished in 41.026930000633 ms
render file2.bigb
render file2.bigb skipped by timestamp
and because we skipped file2.bigb render, it will still have the outdated "File 1 1" instead of "File 1 1 hacked".
We could in principle solve this problem by figuring out exactly which files need to be changed when a given ID changes, and we already have to solve a similar problem due to query bundling. Also, this will need to be done sooner or later for OurBigBook Web. But lazy now: github.com/ourbigbook/ourbigbook/issues/207, this is hard stuff.
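The skip decision itself boils down to a filesystem timestamp comparison, which can be pictured with the shell's -nt (newer-than) test. This is a sketch only: the real CLI records extraction times in its database, and the .last-extract marker file below is purely hypothetical:

```shell
# Sketch of the timestamp skip using a hypothetical marker file.
mkdir -p /tmp/obb-ts && cd /tmp/obb-ts
touch -d '2020-01-01' file2.bigb       # source last modified in 2020
touch -d '2021-01-01' .last-extract    # extraction ran later, in 2021

if [ file2.bigb -nt .last-extract ]; then
  echo 'extract_ids file2.bigb'
else
  echo 'extract_ids file2.bigb skipped by timestamp'
fi
# prints: extract_ids file2.bigb skipped by timestamp
```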
Parse and overwrite the local .bigb OurBigBook Markup input source files with the recommended code format. E.g.:
ourbigbook README.bigb
overwrites README.bigb with the recommended formatting, and:
ourbigbook .
does that for every single file in the current directory.
This option uses the bigb output format.
In order to reach a final stable state, you might need to run the conversion twice. This is not ideal but we don't have the patience to fix it. The reason is that links in image titles may expand twice. This is the usual type of two level recursion that has caused much more serious problems, see e.g. \x within title restrictions. E.g. starting with:
<image my big dog>

\Image[image.png]{title=My <big dog>}

= Big dog
the first conversion leads to uppercasing inside the image title:
<image my big dog>

\Image[image.png]{title=My <big Dog>}

= Big Dog
and the second one to uppercasing the reference to the image title:
<image my big Dog>

\Image[image.png]{title=My <big Dog>}

= Big Dog
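Given that one extra pass may be needed, a simple way to guarantee a stable result is to re-run the formatter until the file stops changing. The loop below demonstrates the idea with a toy tr-based "formatter" standing in for ourbigbook --format-source, which would not be available in a plain shell sketch:

```shell
# Re-run a formatter until its output reaches a fixed point.
# fake_fmt is a toy stand-in for `ourbigbook --format-source`.
fake_fmt() {
  tr 'a-z' 'A-Z' < "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

format_until_stable() {
  prev=$(cat "$1")
  while :; do
    fake_fmt "$1"
    cur=$(cat "$1")
    [ "$cur" = "$prev" ] && break
    prev="$cur"
  done
}

printf 'big dog\n' > /tmp/fmt-demo.txt
format_until_stable /tmp/fmt-demo.txt
cat /tmp/fmt-demo.txt  # prints: BIG DOG
```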
Generate one of the template repositories locally:
End users almost never want this, because it means that to have a sane setup you need to:
  • install OurBigBook globally with npm install -g ourbigbook
  • generate the template
  • then install OurBigBook locally again with npm install
so maybe we should just get rid of that option and ensure that we can provide an up-to-date working template for the latest release.
For now we are keeping this as it is useful to automate the updating of templates during the release procedure.
You can get an overview of all macros in JSON format with:
ourbigbook --help-macros

5.5.10. --log

words: 523 articles: 2
Can be given multiple times to enable certain types of logs to stderr to help debugging, e.g.:
./ourbigbook --log ast tokens -- README.bigb
Note that this follows commander.js' insane variadic arguments syntax, and thus the -- is required above. If you want to omit it for a single value you have to add the = sign as in:
./ourbigbook --log=ast README.bigb
Values not documented in other sections:
  • ast: the full final parsed abstract syntax tree as JSON
  • ast-simple: a simplified view of the abstract syntax tree with one AstNode or AstArgument per line and showing only the most important fields
  • ast-pp-simple: view snapshots of the various abstract syntax tree post process stages, more info at: conversion process overview
  • ast-inside: print the AST from inside the ourbigbook.convert call before it returns.
    This is useful to debug the program if ourbigbook.convert blows up on the next stages before returning.
  • db: show database transactions done by OurBigBook, to help debug stuff like cross file references
  • mem: show process memory usage as per Node.js' process.memoryUsage() after each --log perf step: stackoverflow.com/questions/12023359/what-do-the-return-values-of-node-js-process-memoryusage-stand-for. Implies --log perf.
    To use this option, you must run OurBigBook with the --expose-gc command line option, e.g. with:
    node --expose-gc $(which ourbigbook) myfile.bigb
  • parse: parsing steps
  • tokenize: tokenization steps
  • tokens: final parsed token stream
  • tokens-inside: like ast-inside but for tokens.
    Also adds token index to the output, which makes debugging the parser way easier.
This nifty little option outputs to stderr what the header graph looks like!
It is a bit like a table of contents in your terminal, for when you need to have a look at the outline of the document to decide where to place a new header, but are not in the mood to open a browser or use the browser editor with preview.
Sample output excerpt for this document:
= h1  ourbigbook
== h2 1 quick-start
== h2 2 design-goals
=== h3 2.1 saner
=== h3 2.2 more-powerful
== h2 3 paragraphs
== h2 4 links
This option can also serve as a debug tool for header tree related features (confession: that was its original motivation!).
TODO
Print performance statistics to stderr. For example:
./ourbigbook --log=perf README.bigb
could output:
perf start: 181.33060800284147
perf tokenize_pre: 181.4424349963665
perf tokenize_post: 318.333980999887
perf parse_start: 319.1866770014167
perf post_process_start: 353.5477180033922
perf post_process_end: 514.1527540013194
perf render_pre: 514.1708239987493
perf render_post: 562.834307000041
perf end: 564.0349840000272
perf convert_input_end 566.1234430000186
perf convert_path_pre_sqlite 566.1564619988203
perf convert_path_pre_sqlite_transaction 566.2528780028224
perf convert_path_post_sqlite_transaction 582.256645001471
perf convert_path_end 582.3469280004501
which shows how long different parts of the conversion process took to help identify bottlenecks.
This option can also be useful to mark phases of the conversion to identify from which phase other logs are coming from, e.g. if we wanted to know which part of the conversion is making a ton of database requests we could run:
ourbigbook --log db perf -- README.bigb
and we would see the database requests made at each conversion phase.
Note that --log perf currently does not take sub-converts into account, e.g. include and \OurBigBookExample both call the toplevel conversion function convert, and therefore go through all the conversion intervals, but we do not take those into account, and just dump them all into the same toplevel interval that they happen in, currently between post_process_start and post_process_end.
Skip the database sanity check that is normally done after the ID extraction step.
This was originally added to speed up the web upload development loop: when we knew that there were no errors in the database after a local conversion, we wanted to get to the upload phase faster, as the DB check can take several seconds for a large input.
It later also found usage with Parallel builds followed by a --check-db-only.

5.5.12. --no-db

words: 28
Don't use the ID database during this run. This implies that the on-disk database is not read, and also not written to. Instead, a temporary clean in-memory database is used.
If not given, cross references render with the .html extension as in:
<a href=not-readme.html#h2-in-not-the-readme>
This way, those links will work when rendering locally to .html files which is the default behaviour of:
ourbigbook .
If given however, the links render without the .html as in:
<a href=not-readme#h2-in-not-the-readme>
which is what is needed for servers such as GitHub Pages, which automatically remove the .html extension from paths.
This option is automatically implied when publishing to targets that remove the .html extension such as GitHub pages.
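The difference between the two link styles amounts to dropping the .html suffix before the fragment, as this trivial textual illustration shows:

```shell
# Illustration of the two cross reference href styles.
href='not-readme.html#h2-in-not-the-readme'
echo "$href"                       # default style, works for local .html files
echo "$href" | sed 's/\.html#/#/'  # extensionless style, for servers like GitHub Pages
```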
Only extract IDs to fill the ID database, don't render. This saves time if you only want to render a single file which has references to other files without getting any errors.
Same as --no-render, but for the -W, --web upload stage.
Web upload consists of two stages:
  • extract local ids and render to split ourbigbook files. This can be disabled with --no-render
  • upload to web: first an ID extraction pass, and then a render pass. --no-web-render skips that render pass
Set a custom output directory for the conversion.
If not given, the project toplevel directory is used.
Suppose we have an input file ./test.bigb. Then:
ourbigbook --outdir my_outdir test.bigb
places its output at:
my_outdir/test.html
The same would happen if we instead did a full directory conversion as in:
ourbigbook --outdir my_outdir .
The output would also be placed in my_outdir/test.html.
This option also relocates the out directory to the target destination, e.g.:
ourbigbook --outdir my_outdir test.bigb
would generate:
my_outdir/out
This means that the source tree remains completely clean, and every output and temporary cache is put strictly under the selected --outdir.
Save the output to a given file instead of outputting to stdout:
./ourbigbook --outfile not-readme.html not-readme.bigb
The generated output is slightly different than that of:
./ourbigbook not-readme.bigb > not-readme.html
because with --outfile we know where the output is going, and so we can generate relative includes to default CSS/JavaScript files.
Default: html output format.
The default output format. Web pages!!!
Outputs as OurBigBook Markup, i.e. the same format as the input itself!
While using -O bigb is not a common use case, the existence of this format has the following applications:
  • automatic source code formatting e.g. with --format-source. The recommended format, including several edge cases, can be seen in the test file test_bigb_output.bigb, which should be left unchanged by a bigb conversion.
  • manipulating source code on OurBigBook Web to allow editing either individual sections separately, or multiple sections at once
  • this could be adapted to allow us to migrate updates with breaking changes to the source code more easily. Alternatively on OurBigBook Web, we might just start storing the AST instead of source, and just render the source whenever users want to edit it.
Can be tested interactively with:
ourbigbook --no-db -O bigb --stdout --log=ast-simple test_bigb_output.bigb
One important property of the bigb conversion is that it must not alter the AST, and therefore the final output, in any way.
One good test is:
ourbigbook README.bigb &&
mv out/html/index.html out/html/old.html &&
ourbigbook --format-source README.bigb  &&
ourbigbook README.bigb &&
diff -u out/html/old.html out/html/index.html
This was tracked at: github.com/ourbigbook/ourbigbook/issues/83
5.5.18.3. id output format
words: 398 articles: 3
This output format is used as an intermediate step in automatic ID from title, and unlike the regular HTML output it does not contain any tags.
It does not have serious applications for end users. We decided to expose it from the CLI mostly for fun, as it posed no extra work at all: it is treated internally exactly like any other conversion format.
The id output format conversion is very simplistic: it basically just extracts the content argument of most macros.
An important exception to that behaviour is the first argument of the \x macro: see \x id output format.
For example, converting:
\i[asdf]
with the id output format produces simply:
asdf
instead of the HTML output:
<i>asdf</i>
This conversion type is useful in situations where users don't expect conversion to produce any HTML tags. For example, you could create a header:
= My \i[asdf]
and then following the automatic ID from title algorithm, that header would have the more commonly desired ID my-asdf, and not my-<i>asdf</i> or my-i-asdf-i.
Similarly, any macro argument that references an ID undergoes id output format conversion. E.g. the above header could be referenced by:
<My \i[asdf]>
which is equivalent to:
\x[my-asdf]
Besides being more intuitive, this conversion also guarantees greater format portability, in case we ever decide to support other output formats besides HTML!
Macros that don't have a content argument are just completely removed, i.e. typically non-textual macros such as images. We could put effort in outputting their title argument correctly, but meh, not worth the effort.
The id output format also serves as a good starting point for generalizing OurBigBook to multiple outputs, as it is a simple format.
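To get a feel for the algorithm, here is a crude textual approximation that strips an \i wrapper and then applies the usual lowercase-and-hyphenate step. This is illustration only; the real conversion walks the AST rather than using regexes:

```shell
# Crude approximation of id output format + automatic ID from title:
# strip \i[...] wrappers, then lowercase and replace spaces with hyphens.
title='My \i[asdf]'
printf '%s\n' "$title" |
  sed 's/\\i\[\([^]]*\)\]/\1/g' |
  tr 'A-Z ' 'a-z-'
# prints: my-asdf
```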
\x uses href if the content is not given explicitly.
Previously, if \x didn't have a content, we were actually rendering the \x to calculate the ID. But then we noticed that doing so would require another parse pass, so we just went for this simpler approach. This is closely linked to \x within title restrictions.
For example in:
= Animal

\x[image-i-like-dog]

\Image[dog.jpg]
{title=I \i[like] \x[dog]}

== Dog hacked
{id=dog}
then the image ID comes out as image-i-like-dog: the \x[dog] contributes its href dog to the ID, not the rendered header title "Dog hacked". If you wanted image-i-like-dog-hacked instead, you would need to explicitly give it as in:
= Animal

\x[image-i-like-dog-hacked]

\Image[dog.jpg]
{title=I like \x[dog][dog hacked]}

== Dog hacked
{id=dog}
For similar reasons as the above, {p} inflection with the \x p argument is not considered either, e.g. you would have:
= Animal

\x[image-i-like-dog]

\Image[dog.jpg]
{title=I like \x[dog]{p}}

== Dog
and not:
\x[image-i-like-dogs]
This can however be worked around with the \x magic argument as in:
= Animal

\x[image-i-like-dogs]

\Image[dog.jpg]
{title=I like <dogs>}

== Dog
TODO: github.com/ourbigbook/ourbigbook/issues/38
One day, one day. Maybe.
OurBigBook tooling is so amazing that we also take care of the HTML publishing for you!
Once a publish target is properly setup, all you have to do is run:
git add README.bigb
git commit -m 'more content!'
ourbigbook --publish
and your changes will be published to the default target specified in ourbigbook.json.
If not specified, e.g. with the --publish-target option, the default is to publish to GitHub Pages.
Only changes committed to Git are pushed.
Files that ourbigbook knows how to process get processed and only their outputs are added to the published repo, those file types are:
  • .bigb files are converted to .html
  • .scss files are converted to .css
Every other Git-tracked file is pushed as is.
When --publish is given, stdin input is not accepted, and so the current directory is built by default, i.e. the following two are equivalent:
./ourbigbook --publish
./ourbigbook --publish .
Publishing only happens if the build has no errors.
Like the --publish option, but also automatically:
  • git add -u to automatically add changes to any files that have been previously git tracked
  • git commit -m <commit-message> to create a new commit with those changes
This allows you to publish your changes live in a single command such as:
ourbigbook --publish-commit 'my amazing change' .
With great power comes great responsibility of course, but who cares!
Attempt to publish without converting first. Implies the --publish option.
This can only work if there was previously a successful publish conversion done, which later failed to publish during the following steps, e.g. due to a network error.
This option was introduced for debugging purposes to help get the git commands right for large conversions that took a long time.

5.5.22. --publish-target

words: 287 articles: 6
What type of target to publish for. The generated output of each publish target is stored under:
out/publish/out/<target>
e.g.:
out/publish/out/local
Publish to GitHub Pages. See also: Section 5.5.22.3. "Publish to GitHub Pages".
Publish as a local directory that can be zipped and sent to someone else, and then correctly viewed by a browser locally by the receiver. You can then zip it from the Linux command line for example with:
cd out/publish/out
zip -r local.zip local
Maybe we should do the Zip step from the OurBigBook CLI as well. There is no Node.js standard library wrapper however apparently: stackoverflow.com/questions/15641243/need-to-zip-an-entire-directory-using-node-js
5.5.22.3. Publish to GitHub Pages
words: 197 articles: 3
GitHub pages is the default OurBigBook publish target.
Since that procedure is so important, it is documented directly at: play with the template.
If you want to publish your root user page, which appears at / (e.g. github.com/cirosantilli/cirosantilli.github.io for the user cirosantilli), GitHub annoyingly forces you to use the master branch for the HTML output:
This means that you must place your .bigb input files in a branch other than master to clear up master for the generated HTML.
ourbigbook automatically detects if your repository is a root repository or not by parsing git remote output, but you must setup the branches correctly yourself.
So on a new repository, you must first checkout to a different branch as in:
git init
git checkout -b dev
or to move an existing repository to a non-master branch:
git checkout -b dev
git push origin dev:dev
git branch -D master
git push --delete origin master
You then will also want to set your default repository branch to dev in the settings for that repository: help.github.com/en/github/administering-a-repository/setting-the-default-branch
It's a GitHub bug/feature: github.com/orgs/community/discussions/52252
Maybe we should just ignore the .github directory when publishing, otherwise it leads to a broken link on the _dir directory listings.
TODO find some upstream discussion.
Split each header into its own separate HTML output file.
This option allows you to keep all headers in a single source file, which is much more convenient than working with a billion separate source files, and let them grow naturally as new information is added, but still be able to get a small output page on the rendered website that contains just the content of the given header. Such split pages:
For example given an input file called hello.bigb and containing:
= h1

h1 content.

A link to another section: \x[h1-1].

== h1 1

h1-1 content.

== h1 1 1

h1-1-1 content.

== h1 1 2

h1-1-2 content.
a conversion command:
ourbigbook --split-headers hello.bigb
would produce the following output files:
  • hello.html: contains the entire rendered document as usual.
    Remember that this is called hello.html instead of h1.html because the toplevel header ID is automatically derived from its filename.
    Each header contains an on-hover link to the single-file split version of the header.
  • hello-split.html: contains only the contents directly under = h1, but not under any of the subheaders, e.g.:
    • h1 content. appears in this rendered output
    • h1-1-1 does not appear in this rendered output
    The -split suffix is appended in order to differentiate the output path from hello.html, and can be customized with the \H splitSuffix argument.
  • h1-1.html, h1-1-1.html, h1-1-2.html: contain only the contents directly under their headers, analogously to hello-split.html, but now we don't need to worry about the input filename and collisions, and just directly use the ID of each header
--split-headers is implied by the --publish option: the published website will automatically get the split pages. There is no way to turn it off currently. A pull request would be accepted, especially if it offers a ourbigbook.json way to do it. Maybe it would be nice to have a more generalized way of setting any CLI option equivalent from the ourbigbook.json, and an option cli vs cli-publish so that cli-publish is publish only. Just lazy for now/not enough pressing use case met.
By default, all cross references point to the non-split version of headers, including those found in split headers.
The rationale for this is that it gives readers the most context around the header by simply scrolling.
For example, considering the example document at -S, --split-headers, cross references such as \x[h1-1] would point:
  • from the non-split hello.html to the section in the current non-split file #h1-1
  • from split hello-split.html to the same section in non-split file with hello.html#h1-1
The same applies to cross file references when there are multiple input files.
In order to make the split version be the default for some headers, you can use the \H splitDefault argument.
This is something that we might consider changing with some option, e.g. keeping the split headers more self contained. But for now, the general feeling is that going to nosplit by default is the best default.
When converting a file, also output to stdout in addition to outputting to a file:
convert --stdout input.bigb
The regular output file is also saved.
Cannot be used when converting a directory.

5.5.25. --template

words: 663 articles: 3
Select a custom Liquid template file for the output.
If not given, this option defaults to the value of template, which if not given defaults to ourbigbook.liquid.html.
The repository of this documentation for example has a sample ourbigbook.liquid.html at: ourbigbook.liquid.html.
If no template is present, the default template at one point was:
<!doctype html>
<html lang=en>
<head>
<meta charset=utf-8>
<title>{{ title }}</title>
<style>{{ style }}</style>
</head>
<body class="ourbigbook">
{{ body }}
</body>
</html>
This will get out of sync sooner or later with the code, but this should still serve as a good base example for this documentation.
Defined variables:
  • body: the rendered body
  • dir_relpath: relative path from the rendered output to the _dir directory. Sample usage to link to the root directory listing:
    <div><a href="{{ dir_relpath }}{{ html_index }}">Website source code</a></div>
  • git_sha: SHA of the latest git commit of the source code if in a git repository
  • github_prefix: this variable is set only if the "github" media provider is configured. It points to the URL prefix of the provider, e.g. if you have in your ourbigbook.json:
    "media-providers": {
      "github": {
        "remote": "mygithubusername/media"
      },
    then you can use media from that repository with:
    <img src="{{ github_prefix }}/myimage.jpg" />
  • html_ext: .html for local renders, empty for server renders.
    So e.g. to link to an ID myid you can use:
    <a href="{{ root_relpath }}myid{{ html_ext }}">
    This will ideally be replaced with a more generic link-to-arbitrary-ID mechanism at some point: github.com/ourbigbook/ourbigbook/issues/135
  • html_index: /index.html for local renders, empty for server renders
  • input_path: path to the OurBigBook Markup source file relative to the project toplevel directory that generated this output, e.g. path/to/myfile.bigb
    May be an empty string in the case of autogenerated sources, notably automatic directory listings, so you should always check for that with something like:
    {% if input_path != "" %}
    <div>Source code for this page: <a href="{{ raw_relpath }}/{{ input_path }}">{{ input_path }}</a></div>
    {% endif %}
  • is_root_relpath. Boolean. True if the toplevel being rendered on this output file is the index article. E.g. in:
    README.bigb
    = John Smith's homepage
    
    == Mathematics
    with split header conversion, the value of is_root_relpath would be:
    • index.html: true
    • split.html: true
    • mathematics.html: false
  • root_page: relative path to the toplevel page, e.g. either index.html, ../index.html locally or ./, ../ on server oriented rendering
  • root_relpath: relative path from the rendered output to the toplevel directory.
    This allows for toplevel resources like CSS to be found seamlessly from inside subdirectories, especially when rendering locally.
    For example, for the toplevel CSS main.css which is generated from main.scss, we can use:
    <link rel="stylesheet" type="text/css" href="{{ root_relpath }}main.css">
    Then, when a file is rendered locally, for example under a subdirectory as mysubdir/myfile.html, OurBigBook will set:
    root_relpath=../
    giving the desired:
    <link rel="stylesheet" type="text/css" href="../main.css">
    And if the output path were instead just myotherfile.html, root_relpath expands to an empty string, giving again the correct:
    <link rel="stylesheet" type="text/css" href="main.css">
    This will ideally be replaced with a more generic link-to-arbitrary-ID mechanism at some point: github.com/ourbigbook/ourbigbook/issues/135
  • raw_relpath: relative path from the rendered output to the _raw directory. Should be used to prefix all non-OurBigBook Markup output resources, which is the directory where such files are placed during conversion, e.g.
    <link rel="shortcut icon" href="{{ raw_relpath }}/logo.svg" />
  • file_relpath: similar to raw_relpath, but link to the _file output directory instead
  • style: default OurBigBook stylesheets
  • title
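The root_relpath expansion described above can be pictured with a small sketch (a hypothetical illustration of the path math, not the actual implementation):

```python
import os

def root_relpath(output_path):
    # Relative path from the output file's directory back to the output
    # tree's toplevel, with a trailing slash, or '' at the toplevel itself.
    rel = os.path.relpath('.', os.path.dirname(output_path) or '.')
    return '' if rel == '.' else rel.replace(os.sep, '/') + '/'

print(root_relpath('mysubdir/myfile.html'))  # ../
print(root_relpath('myfile.html'))           # (empty string)
```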
We pick Liquid because it is server-side safe: if we ever some day offer a compilation service, Liquid is designed to prevent arbitrary code execution and infinite loops in templates.
ourbigbook.liquid.html is the default template file name used for --template as mentioned at template.
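Pulling together the variables documented above, a custom ourbigbook.liquid.html could look like the following sketch; every {{ }} variable is one of the documented ones, while the surrounding HTML is just illustrative:

```html
<!doctype html>
<html lang=en>
<head>
<meta charset=utf-8>
<title>{{ title }}</title>
<style>{{ style }}</style>
<link rel="stylesheet" type="text/css" href="{{ root_relpath }}main.css">
<link rel="shortcut icon" href="{{ raw_relpath }}/logo.svg">
</head>
<body class="ourbigbook">
{{ body }}
{% if input_path != "" %}
<div>Source code for this page: <a href="{{ raw_relpath }}/{{ input_path }}">{{ input_path }}</a></div>
{% endif %}
</body>
</html>
```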
5.5.25.2. Template variable
words: 64 articles: 1
true iff the --publish-target is a standard website, i.e. something that will be hosted publicly on a URL. This is currently true for the following publish targets:
  • --publish-target github-pages
and it is false for the following targets:
  • --publish-target local
This template variable is useful to remove JavaScript elements that only work on public websites and not on localhost or file:, e.g.:
  • Google Analytics
  • Giscus
Read titles from stdin line by line in a loop and output IDs to stdout only, performing automatic ID from title conversion on each input line.
Sample usage:
( echo 'Hello world'; sleep 1; echo 'C++ is great'; sleep 1; echo 'β Centauri' ) | ourbigbook --title-to-id
outputs:
hello-world
c-plus-plus-is-great
beta-centauri
each with one second intervals between each line.
The original application of this option was to allow external non-Node.js processes to accurately calculate IDs from human-readable titles, since the non-ASCII handling of the algorithm is complex and hard to reimplement accurately.
From Python for example one may run something like:
from subprocess import Popen, PIPE
import time

# Keep a single long-running ourbigbook process and feed it one title per line.
p = Popen(['ourbigbook', '--title-to-id'], stdout=PIPE, stdin=PIPE)

p.stdin.write('Hello world\n'.encode())
p.stdin.flush()
# Strip the trailing newline from the returned ID.
print(p.stdout.readline().decode()[:-1])

time.sleep(1)

p.stdin.write('bonne journée\n'.encode())
p.stdin.flush()
print(p.stdout.readline().decode()[:-1])
This option enables actions that would allow arbitrary code execution, so you should only pass it if you trust the repository author. Enabled functionality includes:
Don't quit ourbigbook immediately.
Instead, watch the selected file or directory for changes, and rebuild individual files when changes are detected.
Watch every .bigb file in an entire directory:
ourbigbook --watch .
When a directory is given as the input path, this automatically first does an ID extraction pass on all files to support cross file references.
Now you can just edit any OurBigBook file such as README.bigb, save the file in your editor, and refresh the webpage: your change should be visible without running an ourbigbook command explicitly every time.
Exit by entering Ctrl + C on the terminal.
Watch a single file:
ourbigbook --watch README.bigb
When a single file is watched, the reference database is not automatically updated. If it is not already up-to-date, you should first update it with:
ourbigbook .
otherwise you will just get a bunch of undefined ID errors every time the input file is saved.
TODO: integrate Live Preview: asciidoctor.org/docs/editing-asciidoc-with-live-preview/ to also dispense the browser refresh.

5.5.29. -W, --web (Web upload)

words: 358 articles: 1
Sync local directory to OurBigBook Web instead of doing anything else.
To upload the entire repository, run from toplevel:
ourbigbook --web
To upload just a single physics.bigb source file use:
ourbigbook --web physics.bigb
This requires that all external IDs that physics.bigb might depend on have already been previously uploaded, e.g. with a previous ourbigbook --web from toplevel.
The source code is uploaded, and conversion to HTML happens on the server; no conversion is done locally.
This option is not amazing right now. It was introduced mostly to allow uploading the reference demo content from cirosantilli.com to ourbigbook.com/cirosantilli, and it is not expected that it will be a major use case for end users for a long time, as most users are likely to just edit on OurBigBook Web directly.
Some important known limitations:
  • every local file has to be uploaded every time to check if it needs rebuilding or not, by comparing old vs new file contents. At Store SHA of each article + descendants and skip API re-renders for entire subtrees we describe a better Git-like Merkle tree method where entire unchanged subtrees can be skipped; that will be Nirvana.
  • file renaming does not work: the upload will think that you are creating a new file and blow up with duplicate errors
  • if there's an error in a later file, the database is still modified by the previous files, i.e. there is no atomicity. A way to improve that would be to upload all files to the server in one go, and let the server convert everything in one transaction. However, this would lead to a very long server action, which would block any other incoming request (I tested, everything is single threaded)
However, all of those are fixable, and in an ideal world, will be fixed. Patches welcome.
If you delete a header locally and then do -W, --web upload, the article is currently not removed from web.
Instead, we simply make its content become empty.
The reason for this is that the article may have metadata created by other users, such as OurBigBook Web discussions, which we don't want to delete.
In order to actually remove the header you should follow the procedure from Section 7.1.5. "OurBigBook Web page renaming", which instead first moves all discussions over to a new article before deleting.
Ask for the password in an interactive terminal in case there was a default password that would have otherwise been chosen.
Currently the only case where this happens is --web-test which automatically sets a default --web-password asdf.
-W, --web dry run, skip any operations that would interact with the OurBigBook Web server, doing only all the local preparation required for upload.
This is mostly useful for testing the OurBigBook CLI.
Upload only the selected ID with -W, --web.
That ID must belong to a file being converted for everything to work well. e.g.:
ourbigbook --web --web-id quantum-mechanics physics.bigb
Force ID extraction on -W, --web, even if article content is unchanged.
The only use case so far for this has been as a hack for incomplete database updates.
The correct approach is instead to actually re-extract server side as part of the migration. We should do this by implementing a Article.reextract analogous to Article.rerender, and a helper web/bin/rerender-articles.js.
Force remote render of -W, --web, don't skip it if even if the render is believed to be up-to-date with source.
This is analogous to -F, --force-render.
--web-force-render does not skip the local pre-conversion to split bigb format that is done before upload, only the remote render. Conversely, when used together with -W, --web, -F, --force-render does skip the local bigb conversion, and not the remote one.
Render up to a maximum of N articles.
Useful for quick and dirty OurBigBook Web performance benchmarking, especially together with --web-force-render to avoid skipping over finished files.
Update the nested set index of OurBigBook Web, don't do anything else. Implies -W, --web.
This option was originally introduced to help test bulk nested set updates.
Only update the nested set index after all articles have been uploaded.
There is a complex time tradeoff between using this option or not, which depends on:
  • how many articles the user has
  • how many articles are being uploaded
This option was initially introduced for Wikipedia bot uploads. At 104k articles, the bulk update takes 1 minute, but each individual update of an empty article takes about 6 seconds (and is dominated by the nested set update time), making this option an indispensable time saver for the initial upload in that case.
Therefore in that case, for fewer than about 10 articles you are better off without this option, but with more than 10 articles you want to use it.
This rule of thumb should scale for smaller deployments as well however. E.g. at 10k articles, both individual updates and bulk updates should be 10x faster, so the "use this option for 10 or more articles" rule of thumb should still be reasonable.
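The break-even arithmetic above can be sketched as a quick calculation using the quoted 104k-article measurements (these constants come from that one case only, not a universal benchmark):

```python
BULK_UPDATE_SECONDS = 60   # one bulk nested set update at 104k articles
PER_ARTICLE_SECONDS = 6    # nested-set-dominated cost of one individual update

def bulk_is_faster(n_articles):
    """True when a single bulk update beats n individual nested set updates."""
    return BULK_UPDATE_SECONDS < n_articles * PER_ARTICLE_SECONDS

print(bulk_is_faster(5))    # False: for a handful of articles, skip the option
print(bulk_is_faster(100))  # True: well past the ~10 article break-even
```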
Set password from CLI. Really bad idea for non-test users with fixed dummy passwords due e.g. to Bash history.
Set defaults for --web-* options that are useful for testing locally:
ourbigbook --web-test
is equivalent to:
ourbigbook --web --web-url http://localhost:3000 --web-user barack-obama --web-password asdf
You can also override those defaults by just specifying them normally, e.g. to do a different user:
ourbigbook --web-test --web-user donald-trump
Set a custom URL for -W, --web from the command line. If not given, the canonical ourbigbook.com is used. This option is useful e.g. for testing locally with:
ourbigbook --web --web-url http://localhost:3000
Also consider --web-test for local testing.
Set the username for -W, --web from the command line, e.g.:
ourbigbook --web --web-url http://localhost:3000 --web-user barack-obama
If not given:
  • use the latest previous successful web login with ourbigbook --web if there is one. In that case, the CLI informs you with a message like:
    Using previous username: barack-obama
  • otherwise, you will be prompted for it from the command line.

5.6. ourbigbook.json

words: 3k articles: 33
OurBigBook configuration file that affects the behaviour of ourbigbook for all files in the directory.
ourbigbook.json is not used for input from stdin, since we are mostly doing quick tests in that case.
While ourbigbook.json is optional, it is used to determine the toplevel directory of a OurBigBook project, which has some effects such as those mentioned at the toplevel index file.
Therefore, it is recommended that you always have a ourbigbook.json in your project's toplevel directory, even if it is going to be an empty JSON containing just:
{}
For example, if you convert a file in a subdirectory such as:
ourbigbook subdir/notindex.bigb
then ourbigbook walks up the filesystem tree looking for ourbigbook.json, e.g.:
  • is there a ./subdir/ourbigbook.json?
  • otherwise, is there a ./ourbigbook.json?
  • otherwise, is there a ../ourbigbook.json?
  • otherwise, is there a ../../ourbigbook.json?
and so on.
If we reach the root path / and no ourbigbook.json is found, then we understand that there is no ourbigbook.json file present.
List of JavaScript regular expressions. If a file path matches any of them, then override ignore and don't ignore the path. E.g., if you have several .scss examples that you don't want to convert, but you do want to convert the main.scss of the website itself:
"ignore": [
  ".*\\.scss"
]
"dontIgnore": [
  "main.scss"
]
Note however that if an upper directory is ignored, then we don't recurse into it, and dontIgnore will have no effect.
Analogous to dontIgnore but acts on ignoreRender rather than ignore.

5.6.3. ignore

words: 154
List of paths relative to the project toplevel directory that OurBigBook CLI will ignore, unless it also has a match in dontIgnore.
Each entry is a JavaScript regular expression, and it must match the entire path from start to end to count.
If a directory is ignored, all its contents are also automatically ignored.
Useful if your project has a large directory that does not contain OurBigBook sources, and you don't want OurBigBook to mess with it.
Only ignores recursive conversions, e.g. given:
  "ignore": [
    "web"
  ]
doing:
ourbigbook .
skips that directory, but
ourbigbook web/myfile.bigb
converts it because it was explicitly requested.
Examples:
  • ignore all files with a given extension:
    "ignore": [
      ".*\\.tmp",
    ]
    Yes, it is a bit obnoxious to have to escape . and the backslash. We should use some proper globbing library like: github.com/isaacs/node-glob. But on the other hand ignore from .gitignore makes this mostly useless, as .gitignore will be used most of the time.
TODO: also ignore during -w, --watch.
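The full-path matching rule can be illustrated with a sketch (hypothetical; OurBigBook's actual matching also handles directory recursion as noted above):

```python
import re

# Each "ignore" entry must match the ENTIRE path to count.
ignore_patterns = [r".*\.tmp", r"web"]

def is_ignored(path):
    return any(re.fullmatch(p, path) for p in ignore_patterns)

print(is_ignored("notes.tmp"))        # True: full match of .*\.tmp
print(is_ignored("web"))              # True: the directory itself is ignored
print(is_ignored("web/myfile.bigb"))  # False: covered by directory recursion,
                                      # not by the regex itself
```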
Similar to ignore, but only ignores the files from rendering conversions such as bigb -> html and scss -> css.
Unlike ignore, matching files are still placed under the _raw directory and can be publicly viewed.
You almost always want this option over ignore, with files that should not be in the repository being just ignored with your .gitignore instead: Section 5.8.1. "Ignore from .gitignore".

5.6.5. ourbigbook.json id

words: 280 articles: 4
Dictionary of options that control automatic ID from title generation.
5.6.5.1. id normalize latin
words: 209 articles: 1
If true, does Latin normalization on the title.
Default: true.
Latin normalization is a custom OurBigBook-defined normalization that converts many characters that look like Latin characters into ASCII Latin characters.
For now, we are using the deburr method of Lodash: lodash.com/docs/4.17.15#deburr, which only affects Latin-like characters.
In addition to deburr we also convert:
  • en-dash and em-dash to the simple ASCII dash -. Wikipedia loves en-dashes in its article titles!
  • Greek letters to their standard Latin names, e.g. α to alpha
One notable effect is that accented variants of ASCII letters are converted to plain ASCII letters, e.g. é to e, dropping the accent.
This operation is roughly a superset of Unicode normalization restricted to Latin-like characters, where Unicode basically only removes things like diacritics.
OurBigBook normalization on the other hand also does other natural transformations that Unicode does not, e.g. æ to ae as encoded by deburr, plus further custom replacements.
TODO lodash.deburr:
Bibliography:
5.6.5.2. id normalize punctuation
words: 65 articles: 1
If true, does Punctuation normalization on the title.
Default: true.
Some selected punctuation marks are automatically converted into their dominant corresponding pronunciations. These are:
  • %: percent
  • &: and
  • +: plus
  • @: at
  • − (Unicode minus sign, U+2212, distinct from the ASCII hyphen): minus
Dashes are added around the signs if needed, e.g.:
  • C++: c-plus-plus
  • Q&A: q-and-a
  • Folding@home: folding-at-home
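As a rough sketch of the substitutions above (hypothetical and simplified; use ourbigbook --title-to-id for exact results):

```python
import re

# Selected punctuation marks and their spelled-out replacements.
PUNCTUATION = {'%': 'percent', '&': 'and', '+': 'plus', '@': 'at', '\u2212': 'minus'}

def normalize_punctuation(title):
    for char, word in PUNCTUATION.items():
        title = title.replace(char, '-' + word + '-')
    # Collapse any separator runs into single dashes, as in the examples above.
    return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')

print(normalize_punctuation('C++'))           # c-plus-plus
print(normalize_punctuation('Q&A'))           # q-and-a
print(normalize_punctuation('Folding@home'))  # folding-at-home
```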

5.6.6. lint

words: 134 articles: 2
Dictionary of lint options to enable. OurBigBook tries to be strict about forcing specific styles by default, e.g. forbids triple newline paragraph. But sometimes we just can't bear it :-)
Possible values:
  • parent: forces headers to use \H parent argument to specify their level
  • number: forces headers to not use the \H parent argument to specify their level, i.e. to specify levels with a number of = signs
You should basically always set either one of those on any serious project. Forgetting a parent= in a project that uses parent= everywhere else is a common cause of build bugs, and can be hard to debug without this type of linting enabled.
Possible values:

5.6.7. h

words: 321 articles: 3
This dictionary stores options related to headers.
Sets the default \H numbered argument of the toplevel headers of each source file.
Note that since the option is inherited by descendants, this can also affect the rendering of ancestors.
github.com/ourbigbook/ourbigbook/issues/188 contains a proposal to instead inherit this property across includes.
If you set this ourbigbook.json option:
{
  "h": {
    "numbered": true
  }
}
it is possible to override it for a specific file with an explicit numbered=0 \H numbered argument:
= Not numbered exception
{numbered=0}

== Child also inherits not numbered
Make every link to something that is not on the current page open in a new tab instead of the current one, i.e. add target="_blank" to such links.
This option is exactly analogous to the numbered option, but it affects the \H splitDefault argument instead of the \H numbered argument.
If given, the toplevel output of each input source is always non-split, and a split version is not generated at all.
This of course overrides the \H splitDefault argument for toplevel headers, making any links go to the non split version, as we won't have a split version at all in this case.
E.g.:
ourbigbook.json
{
  "h": {
    "splitDefault": true,
    "splitDefaultNoToplevel": true,
  }
}
my-first-header.bigb
= My first header

== My second header
When converted with:
ourbigbook --split-headers my-first-header.bigb
would lead only to two output files:
  • my-first-header: not split
  • my-second-header: split
Without splitDefaultNoToplevel we would instead have:
  • my-first-header: split
  • my-first-header-nosplit: not split
  • my-second-header: split
The initial use case for this was in OurBigBook Web. If we didn't do this, then there would be two versions of every article at the toplevel of a file: split and nosplit.
This would be confusing for users, who would e.g. see two new articles on the article index every time they create a new one.
It would also mean that metadata such as comments would be visible in two separate locations.
So instead of filtering the duplicate articles on every index, we just don't generate them in the first place.
If false, implies --no-html-x-extension.
The initial application of this option was to Section 5.7. "Redirect from a static website to a dynamic website".
The media-providers entry of ourbigbook.json specifies properties of how media such as images and videos are retrieved and rendered.
The general format of media-providers looks like:
"media-providers": {
  "github": {
    "default-for": ["image"], // "all" to default for both image, video and anything else
    "path": "data/media/",    // data is gitignored, but should not be nuked like out/
    "remote": "ourbigbook/ourbigbook-media"
  },
  "local": {
    "default-for": ["video"],
    "path": "media/",
  },
  "youtube": {}
}
Properties that are valid for every provider:
  • default-for: use this provider as the default for the given types of listed macros.
    The first character of the macro names is case insensitive and must be given as lower case. Therefore e.g.:
    • image applies to both image and Image
    • giving Image is an error because that starts with an upper case character
  • title-from-src (bool): extract the title argument from the src by default for media such as images and videos as if the titleFromSrc macro argument had been given, see also: Section 4.2.7.1. "Image ID"
Direct children of media-providers and subproperties that are valid only for them specifically:
  • local: tracked in the current Git repository as mentioned at Section 4.2.7.2.1. "Store images inside the repository itself"
    • path: location of the cloned local repository relative to the root of the main repository
  • github: tracked in a separate Git repository as mentioned at Section 4.2.7.2.2. "Store images in a separate media repository"
    • path: analogous to path for local: a local location for this GitHub provider, where the repository can optionally be cloned.
      When not running with the --publish option, OurBigBook checks if the path exists locally, and if it does, then it uses that local directory as the source instead of the GitHub repository.
      This allows you to develop locally without Internet and see the latest version of the images without pushing them.
      During publishing, the GitHub version is used instead.
      TODO make this even more awesome by finishing to implement github.com/ourbigbook/ourbigbook/issues/184:
      • automatically git push this repository during deployment to ensure that any asset changes will be available.
      • ignore the path from OurBigBook conversion as if added to ignore, and is not added to the final output, because you are already going to have a copy of it.
        This way you can use the sanest approach, which is to track the directory as a Git submodule as mentioned at: store images in a separate media repository and track it as a git submodule, instead of either:
        • keeping it outside of the repository
        • keeping it in the repository but explicitly ignoring it as well, which is a bit redundant
    • remote: <github-username>/<repo-name>
  • youtube: YouTube videos
See also: github.com/ourbigbook/ourbigbook/issues/40
Default: true
If true, place the HTML output under the out directory at out/html.
For example with:
{
  "outputOutOfTree": false
}
then
ourbigbook hello.bigb
would place its output at:
hello.html
instead of out/html/hello.html.
Advantages of outputOutOfTree=true:
  • the source tree becomes cleaner, especially when using -S, --split-headers which can produce hundreds of output files from a single input file
  • if you want to track several .html source files in-tree, you don't need to add an exception for each of them to the .gitignore as:
    *.html
    !/ourbigbook.liquid.html
Disadvantages:
  • you have to type more to open each output file on the terminal
This option is always forced to false when --outdir <outdir> is given.
Implemented at: github.com/ourbigbook/ourbigbook/issues/163
Path of a script that gets executed after conversion, and before upload, when running with the --publish option.
The script arguments are:
  • the publish output directory.
    That directory is guaranteed to exist when prepublish is called.
    For git-based publish targets, all files are almost ready in there, just waiting for a git add . that follows prepublish.
    This means that you can use this script to place or remove files from the final publish output.
If the prepublish script returns with a non-zero exit value, the publish is aborted.
If given, use this fixed date as the author and committer date of the publish commit.
All Git date formats are accepted as documented in man git-commit, e.g. 2005-04-07T22:13:13.
ourbigbook.json options that should be used only on the published output when publishing with the --publish option.
If given these options override pre-existing options on the published output.
Only options that get passed to the OurBigBook Library currently take effect; options that affect e.g. only the ourbigbook executable don't currently work. Lazy.
A custom remoteUrl to push build outputs to.
If not given, this value is extracted by default from the origin remote of the Git repository where the source code is located.
Generate custom redirects.
For example:
"redirects": [
  ["cirodown", "ourbigbook"]
],
produces a file in the output called cirodown.html that redirects to ourbigbook.html.
Absolute URLs are also accepted, e.g.:
"redirects": [
  ["ourbigbook", "https://docs.ourbigbook.com"]
],
produces a file in the output called ourbigbook.html that redirects to https://docs.ourbigbook.com.
When dealing with regular headers, you generally don't want to use this option and instead use the \H synonym argument, which already creates the redirection for you.
This JSON option can be useful however for dealing with things that are outside of your OurBigBook project.
For example, at one point, this project renamed the repository github.com/cirosantilli/cirodown to github.com/ourbigbook/ourbigbook.
Unfortunately, GitHub Pages does not generate redirects like github.com itself.
So in this case, we've added to the ourbigbook.json of the toplevel user repository github.com/cirosantilli/cirosantilli.github.io the lines:
"redirects": [
  ["cirodown", "ourbigbook"]
],
which produces a file in the output called cirodown.html that redirects to ourbigbook.html.
In this case, cirodown and ourbigbook don't have to be any regular IDs present in the database, those strings are just used directly.
TODO ideally we should check for conflicts with regular output from split headers IDs or their synonyms. But lazy.
Select the template Liquid file to use.
Serves as the default value for the --template command line option.
If this option is not given, and if a file ourbigbook.liquid.html exists in the project, then that file is used.
If ourbigbook.liquid.html exists but you don't want to use it, set the option to null and it won't be used.
Make every internal cross reference point to the split header version of the pages of the website. Do this even if those pages don't exist, or if they are not the default target e.g. as per the \H splitDefault argument.
The initial application of this option was to Section 5.7. "Redirect from a static website to a dynamic website".
If this option is set, then nosplit/split header metadata links are removed, since it was hard to come up with a sensible behaviour to them, and it won't matter on web redirection where every page is nonsplit anyways.

5.6.19. web

words: 199 articles: 4
This dict contains options related to interaction between OurBigBook CLI and OurBigBook Web deployments.
5.6.19.1. host
words: 27
Select the default host used for -W, --web operations.
Defaults to ourbigbook.com, the reference Web instance.
Capitalized version of host, e.g. OurBigBook.com.
Default:
  • if host is given, use it
  • otherwise, OurBigBook.com
Shows up on linkFromStaticHeaderMetaToWeb as a potentially more human readable version of the hostname.
Type: boolean. Default: false.
If true, adds a link under the metadata section of every header of a OurBigBook CLI static website pointing to the corresponding article on OurBigBook.com, or another OurBigBook Web instance specified by the host option.
It also sends you to Heaven for supporting the project.
This option requires username to be set.
For example, if you set:
"web": {
  "username": "myusername",
  "linkFromStaticHeaderMetaToWeb": true
}
then in the rendering of a README.bigb:
= Index

== My h2
{scope}

=== My h2 2
{scope}
those headers would have a metadata entry pointing respectively to:
  • https://ourbigbook.com/myusername
  • https://ourbigbook.com/myusername/my-h2
  • https://ourbigbook.com/myusername/my-h2/my-h2-2
In order for such links not to be broken, you should always first do a Web upload to ensure that the articles are present on OurBigBook.com.
Previously named linkFromHeaderMeta.
Type: string.
Sets your OurBigBook.com username. This is used e.g. by linkFromStaticHeaderMetaToWeb.

5.6.20. xPrefix

words: 133
If given, prepend the given string to every single internal cross file reference output.
The initial application of this option was to Section 5.7. "Redirect from a static website to a dynamic website".
E.g. suppose that at myoldsite.com you previously had:
animal.bigb
= Animal

<Dogs> don't eat <bananas>.

== Dog
plant.bigb
= Plant

== Banana
Originally that would render as:
<a href="#dog">Dogs</a> don't eat <a href="plant#banana">bananas</a>.
But then if you set in ourbigbook.json:
{
  "xPrefix": "https://mynewsite.com/"
}
it will instead render as:
<a href="#dog">Dogs</a> don't eat <a href="https://mynewsite.com/plant#banana">bananas</a>.
where:
  • dogs: untouched as it links to the same page as the current one
  • bananas: the prefix is added, as it is on another page
Scopes are automatically resolved so that they will also be present in the target. E.g. in:
subdir/notindex.bigb
<notindex2>
subdir/notindex2.bigb
= Notindex2
we get on subdir/notindex.html:
<a href="https://mynewsite.com/subdir/notindex2.html">
and not:
<a href="https://mynewsite.com/notindex2.html">
This section describes how to generate mass redirects from a static website such as cirosantilli.com to a OurBigBook Web dynamic website such as ourbigbook.com/cirosantilli.
The use case of this is if you are migrating from one domain to another, and want to keep old files around to not break links, but would rather redirect users to the new preferred pages instead to gather PageRank there.
This happened in our case when Ciro felt that OurBigBook Web had reached enough maturity to be a reasonable reading alternative to the static website.
Basically what you want to do in that case is to use the following options:
as in:
"publishOptions": {
  "toSplitHeaders": true,
  "htmlXExtension": false,
  "xPrefix": "https://ourbigbook.com/cirosantilli/"
},

5.8. Ignored files

words: 84 articles: 1
The following files are ignored from conversion:
Note that this applies even if you try to convert a single ignored file such as:
ourbigbook ignored.bigb
We are strict about this in order to prevent accidentally polluting the database with temporary data.
If the project is a Git tracked project, the standard Git ignore rules are used. This includes .git/info/exclude, .gitignore and the user's global gitignore file if any.
TODO: get this working. Maybe we should also bake it into the ourbigbook CLI tool for greater portability. Starting like this as a faster way to prototype:
rm -rf out/parallel
mkdir -p out/parallel
# ID extraction.
git ls-files | grep -E '\.bigb$' | parallel -X ourbigbook --no-render --no-check-db --outdir 'out/parallel/{%}' '{}'
./merge-dbs out/db.sqlite3 out/parallel/*/db.sqlite3
ourbigbook --check-db
# Render.
git ls-files | grep -E '\.bigb$' | parallel -X ourbigbook --no-check-db '{}'
Observed --no-render speedup on 1k small files from the Wikipedia bot and 8 cores: 3x. So not bad.
Observed render speedup on 1k small files from the Wikipedia bot and 8 cores: none. TODO. Is this because of database contention?
The main entry point for the JavaScript API is the ourbigbook.convert function.
An example can be seen under lib_hello.js.
Note that while doing a simple conversion is easy, things get harder if you want to take multi-file features into consideration, notably cross file reference internals.
This is because these features require interacting with the ID database, and we don't do that from the default ourbigbook.convert API because different deployments will have very different implementations, notably:
  • local Node.js run uses SQLite, an implementation can be seen in the ourbigbook file class SqlDbProvider
  • the in-browser version that runs in the browser editor of the OurBigBook Web makes API calls to the server
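The shape of the idea can be sketched as follows (a hypothetical in-memory provider, not the real SqlDbProvider API): conversion code talks to a provider interface, and each deployment plugs in its own backing store.

```javascript
// Hypothetical sketch of a pluggable ID database provider. The real
// SqlDbProvider is backed by SQLite, and OurBigBook Web instead makes
// API calls to the server. Here we just use an in-memory Map.
class InMemoryDbProvider {
  constructor() {
    this.ids = new Map();
  }
  // Record that an ID is defined in a given source file.
  add(id, meta) {
    this.ids.set(id, meta);
  }
  // Resolve an ID, e.g. to render a cross file reference.
  get(id) {
    return this.ids.get(id);
  }
}
const provider = new InMemoryDbProvider();
provider.add('dog', { file: 'animal.bigb' });
```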
These are variables that affect the OurBigBook Library itself, and therefore also get picked up by OurBigBook CLI and OurBigBook Web.
For boolean environment variables, the value for "true" should be 1, e.g. as in:
OURBIGBOOK_POSTGRES=1 ./bin/generate-demo-data.js
Every other value is considered false, including e.g. the string true.
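The convention is strict equality with the string "1", which could be sketched as:

```javascript
// Sketch of the boolean environment variable convention: only the exact
// string "1" means true; "true", "yes", undefined etc. all mean false.
const envBool = (value) => value === '1';
```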
OurBigBook Web is the software program that powers OurBigBook.com, its official flagship instance, see also: Section 7.1.8. "OurBigBook.com". OurBigBook Web is currently the main tool developed by the OurBigBook Project.
This section contains both end user documentation and developer documentation.
More subjective rationale, motivation and planning aspects of the project are documented at: cirosantilli.com/ourbigbook-com.

7.1. OurBigBook Web user manual

words: 3k articles: 15
OurBigBook Web is a bit like Wikipedia, but where each user can have their own version of each page, and it cannot be edited by others without permission.
And it is also a bit like Obsidian (a personal knowledge base): you can optionally keep all your notes in plaintext markup files in your computer and publish either on OurBigBook.com or as a static HTML website on your own domain.
The goal of the OurBigBook Project is to make university students write perfect natural sciences books for free as they are trying to learn for their lectures.
Suppose that Mr. Barack Obama is your calculus teacher this semester.
Being an enlightened teacher, Mr. Obama writes everything that he knows on his OurBigBook.com account. His home page looks something like the following tree:
On your first day of class, Mr. Obama tells his students to read the "Calculus" section, to ask him any questions that come up online, and then just walks away. No time wasted!
While you are working through the sections under "Calculus", you happen to notice that the "Fundamental theorem of calculus" article is a bit hard to understand. Mr. Obama is a good teacher, but no one can write perfect tutorials of every little thing, right?
This is where OurBigBook comes to your rescue. There are two ways that it can help you solve the problem:

7.1.1. OurBigBook Web topics

words: 364 articles: 1
Topics group articles that have the same title by different users. This feature allows you to find the best article for a given topic, and it is one of the key innovations of OurBigBook Web.
Topics are a bit like Twitter hashtags or Quora questions: their goal is to centralize knowledge about a specific subject by different people at a single location.
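Conceptually, the grouping is just a map from normalized title to the articles that share it, e.g. (hypothetical data shapes, not the actual implementation):

```javascript
// Hypothetical sketch of topic grouping: articles by different users that
// share the same title land in the same topic bucket.
function groupByTopic(articles) {
  const topics = new Map();
  for (const article of articles) {
    const key = article.title.toLowerCase();
    if (!topics.has(key)) {
      topics.set(key, []);
    }
    topics.get(key).push(article.author);
  }
  return topics;
}
```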
Video 12. OurBigBook Web topics demo. Source.
Figure 25.
Mr. Obama's article about the fundamental theorem of calculus was not very good, and we were left clueless.
But we can see that there are 3 articles in total about "Fundamental theorem of calculus", 2 of them by other authors, so maybe one of the others will help!
Figure 26.
After clicking that button, we reach the "Fundamental theorem of calculus" topic page.
Here we see that there are 3 articles in total. The one by Mr. Trump has 1 vote, while the others have zero, so Trump's appears on top. So maybe that is the best one!
After a quick read, it does look like it might be interesting. Let's click on "Read the full article" to also see the descendant articles by Mr. Trump.
Figure 27.
That's clearly a superior article that will clear up our problem.
Donald must either be a very talented teacher or a hard working student from some other university. Thanks Mr. Trump!
If even existing topics and discussions have failed you, and you have finally understood a subject after a few hours of Googling, why not share your knowledge by creating a new article yourself?
There are a few ways to do that.
Figure 28. One easy way to create an article about a given topic is to use the "New Article in Topic" button from the topic page. URL: ourbigbook.com/go/topic/fundamental-theorem-of-calculus
Figure 29. Another option is to click the "Create my version of this topic" button. URL: ourbigbook.com/barack-obama/mathematics#fundamental-theorem-of-calculus
Figure 30. If you click any of the above links, you will be redirected to the editor page, and the title will be preset. By simply using that exact same title to create your new article, your article will then appear in the correct "Fundamental theorem of calculus" topic where others might see it. ourbigbook.com/go/new?title=Proof%20of%20the%20fundamental%20theorem%20of%20calculus.
OurBigBook Web implements what we call "dynamic article tree".
What this means is that, unlike the static website generated by OurBigBook CLI where you know exactly which headers will show as children of a given header, we just dynamically fetch a certain number of descendant pages at a time.
As an example of the dynamic article tree, note how the article "Special relativity" can be seen in all of the following pages:
The only efficient way to do this is to pick which articles will be rendered as soon as the user makes the request, rather than having fully pre-rendered pages, thus the name "dynamic".
Video 13. OurBigBook Web dynamic article tree demo. Source.
The design goals of the dynamic article tree are to produce articles such that:
  • each article can appear as the toplevel article of a page to get better SEO opportunities
  • and the page that contains the article can also contain as many descendants as we want to load, not just the article itself, so as to not force readers to click a bunch of links to read more
For example, with a static website, a user could have a page structure such as:
natural-science.bigb
= Natural science

== Physics

\Include[special relativity]
special-relativity.bigb
= Special relativity

== Lorentz transformation
In the static output, we would have two non-split output files:
  • natural-science.html
  • special-relativity.html
plus one split output file for each header if -S, --split-headers were enabled:
  • natural-science-split.html
  • physics.html
  • special-relativity-split.html
  • lorentz-transformation.html
In this setup the header "Physics" for example is present in one of two possible pages:
  • natural-science.html: as a subheader, but Special Relativity is not shown even though it is a child
  • physics.html: as the top header, and Special Relativity is still not shown as we are in split mode
In the case of the dynamic article tree however, we achieve our design goals:
  • "Physics" is the toplevel header, and therefore can get much better SEO
  • "Special Relativity", "Lorentz transformation" and any other descendants will still show up below it, so it is much more readable than a split page that contains only the header itself
We then just cut off at a certain number of articles to not overload the server and browsers on very large pages. Those pages can still be accessed through the ToC, which is currently unlimited. We also want to implement a "load more articles" button to let users load further articles on demand.
And all of that is achieved:
  • without requiring authors to manually determine which headers are toplevel or not to customize page splits with reasonable load sizes.
  • without keeping multiple copies of the render output of each page and corresponding pre-rendered ToCs. On the static website, we already had two renderings for each page: one split and one non-split, and the ToCs were huge and copied everywhere. Perhaps the ToC side could be resolved with some runtime fetching of static JSON, but then that is bad for SEO.
The downside of the feature is slightly slower page loads and a bit more server workload. We have kept it quite efficient server-side by implementing the page fetching with a nested sets implementation.
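The nested sets encoding mentioned above can be illustrated with a toy example (hypothetical rows, not the actual schema): each article stores a [lft, rgt] interval, and the descendants of an article are exactly the rows whose interval falls strictly inside it, so a single range comparison fetches an entire subtree.

```javascript
// Toy nested sets encoding of the example tree from this section.
const rows = [
  { id: 'natural-science', lft: 1, rgt: 8 },
  { id: 'physics', lft: 2, rgt: 7 },
  { id: 'special-relativity', lft: 3, rgt: 6 },
  { id: 'lorentz-transformation', lft: 4, rgt: 5 },
];
// All descendants of a node lie strictly inside its [lft, rgt] interval,
// so one range filter (a single SQL WHERE clause) fetches the subtree.
function descendants(rows, id) {
  const root = rows.find(row => row.id === id);
  return rows
    .filter(row => row.lft > root.lft && row.rgt < root.rgt)
    .map(row => row.id);
}
```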
We believe that the dynamic article tree offers a very good tradeoff between server load, load speeds, SEO, readability and author friendliness.
Each article has its own discussion section. This way you can easily see if other students have had the same problem as you and asked about it already.
Figure 31. Click the "Discussions" button to see if other people hit the same problem as you and created a discussion thread for it. We see that there are 3 total discussions about this header, so let's check them out. URL: ourbigbook.com/barack-obama/mathematics#fundamental-theorem-of-calculus
Figure 32. From these 3 recent threads, the first one has one like, and it seems that someone found the same difficulty that we did. URL: ourbigbook.com/go/issues/barack-obama/fundamental-theorem-of-calculus
Figure 33. So let's inspect that discussion. Ah, clearly, the comments are very illuminating, our problem is solved! URL: ourbigbook.com/go/issue/1/barack-obama/fundamental-theorem-of-calculus

7.1.4. OurBigBook Web editor

words: 149 articles: 1
OurBigBook Web comes with a browser text editor where users can create and edit their articles in OurBigBook Markup.
This is for example the editor you see when creating a new article at: ourbigbook.com/go/new
One day we want to add an option to have a visual editor: Section "WYSIWYG", but for now we'll try to make the text editor as awesome as we can.
Marking a page as the child of another page is easy in OurBigBook Web: you can simply set the parent of the page directly on the editor UI.
If you don't want the article to be the first child of a parent, you can also set the "previous sibling" field. This specifies after which article the new article will be inserted.
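The effect of the "previous sibling" field can be sketched as a list insertion (hypothetical helper, not the actual implementation):

```javascript
// Hypothetical sketch: insert a new child article under a parent, either
// as the first child (default) or right after a given previous sibling.
function insertChild(children, newId, previousSibling) {
  if (previousSibling === undefined) {
    return [newId, ...children]; // default: becomes the first child
  }
  const i = children.indexOf(previousSibling);
  return [...children.slice(0, i + 1), newId, ...children.slice(i + 1)];
}
```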
Video 14. OurBigBook Web parent selection on the web UI. Source.
Video 15. OurBigBook Web error reporting. Source. Note that Video 14. "OurBigBook Web parent selection on the web UI" is not present on this video as the feature wasn't present when the video was made, but the error reporting remains valid.
The current setup works as follows.
Suppose you have a page titled:
Calculus
and therefore with an ID calculus that appears under: ourbigbook.com/barack-obama/calculus
Suppose you want to rename it to "Calculus 2" to have an ID of calculus-2.
The procedure is:
  • set the title to Calculus 2
  • set Calculus as a synonym of the article by adding to the top of the article body:
    Calculus
    {synonym}
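The title-to-ID conversion assumed above can be sketched roughly as follows (a simplification; the real conversion handles more cases, e.g. Unicode):

```javascript
// Rough sketch of title-to-ID conversion: lowercase, collapse runs of
// non-alphanumeric characters into hyphens, trim leading/trailing hyphens.
// The real OurBigBook conversion handles more cases (e.g. Unicode).
function titleToId(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
}
```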
As a result of this:
This is not super user friendly, and could be made better by:
There are currently a few constructs that are legal in OurBigBook CLI but forbidden in Web and will lead to upload errors. TODO we should just make those forbidden on CLI by default with a flag to re-enable if users really want to make their source incompatible with web:
OurBigBook.com is the reference public instance of OurBigBook Web!
This section describes policies specific to that instance, and which don't necessarily apply to other instances people may host elsewhere.

7.1.9. OurBigBook.com policies

words: 926 articles: 4
This section describes policies specific to the OurBigBook.com instance of OurBigBook Web.
Documentation present under OurBigBook Web user manual describes OurBigBook Web in general.
These policies only apply to the official reference OurBigBook.com instance. If you host your own OurBigBook Web, there are no constraints imposed on your content, only on the source code as per LICENSE.txt.
All content that you upload that you own copyright for is automatically dual licensed under the Creative Commons CC BY-SA 4.0. This is for example the same license family used by Wikipedia.
If you don't own the copyright for a work, you may still upload it if its license allows for "perpetual (non-expiring) and non-revocable" usage. This allows for example for:
  • all Creative Commons licenses
  • GNU General Public License
and so on.
Note however that the "non-commercial" (NC) and "no derivatives" (ND) CC license are basically legal minefields as it can be very subjective to decide what counts as commercial or a derivative, and so we will immediately take down material upon copyright owner request as we are not ready to test this in court!
For example:
  • it has not yet been decided if the OurBigBook Project will be run as a not-for-profit or for-profit organization. If a for-profit model is chosen, NC copyright owners could feel that their content being merely hosted on ourbigbook.com might constitute a for-profit usage as it could help bring publicity to the site.
    The project makes the following commitment however: if ever a way is found to make money from the project, all NC content will be excluded from any directly monetizable money-making activities, e.g. ads or otherwise.
  • it is unclear which of the following constitute a derivative:
    • a table of contents that mirrors a ND work, but without the actual contents, which would automatically be filled with "the most upvoted article in a given topic"
    • a section of ND content without the rest of the work?
    • ND content but with extra article interlinking added?
    • ND content with IDs (such as HTML id= elements) but where IDs have been
    • a public modification request to an ND content?
Unfortunately, NC is extremely popular amongst academics, presumably due to professors' hopes that one day their notes may become a book which will sell for money, or maybe simply for idealist reasons, and it would be too hard to fight against such licenses at this point in time.
Ultimately the project will have to decide if such licenses are worth the trouble or not, and if one day it seems apparent that they are not, a mass take down may happen. But for now we are willing to try. Wikimedia Commons for example has decided not to allow NC and ND.
Content that is not freely licensed might be allowed for upload under a fair use rationale. Fair use is murky waters. Wikipedia for example takes a very strict approach of very limited fair use: en.wikipedia.org/wiki/Wikipedia:Non-free_content, but we are more relaxed about it, and only take gray cases down upon copyright owner request.
Some examples of what should generally be OK:
  • quote up to a paragraph from a copyrighted book, clearly attributing it
  • explain what you've learned from a book or course in your own words.
    You also have to take some care to not copy the exact structure of the original, as that itself could be subject to copyright.
    One good approach is to just use several sources. If multiple sources use the same structure, then it is more arguable that this structure is not a novel copyrighted thing.
  • use a copyrighted image when there is no free alternative to illustrate what you are talking about
If the copyright owner complains in such cases, we might have to take something down, but as long as you are not just uploading a bunch of obviously copyrighted content, it's not the end of the world, we'll just find another freer way to explain things without them.
More egregious cases such as the upload of:
  • entire copyrighted books
  • copyrighted pieces of music
and so on will obviously be taken down preemptively as soon as noticed even without a take down request.
Anything you want, as long as it is legal. This notably includes not violating copyright, see also: OurBigBook.com content license.
At some distant point in the future we could start letting people self tag content that is illegal in certain countries or for certain age groups, and we could then block this content to satisfy the laws of each country.
Websites such as Wikipedia or Stack Exchange have a political system where users can gain privileges, and once they have gained those privileges, they can edit or delete your content.
In OurBigBook Web, unless you explicitly give other users permission to do so, only admins of the website can ever delete any content, and that will only ever be done if:
Admins will always be a small number of people, either employed by, or highly trusted by OurBigBook Project leaders. They are not community elected. Their actions may be reversed at anytime by the OurBigBook Project leadership.
We haven't implemented it yet, but it is an important feature that we will implement: you will be able to download all your content as a .zip file containing OurBigBook Markup files, and then you will be able to generate the HTML for your content on your own computer with the open source OurBigBook implementation. There are then several alternative ways to host the generated HTML files, including free ones such as GitHub Pages.

7.2. OurBigBook Web development

words: 6k articles: 68
OurBigBook Web is a regular database-backed dynamic website. This is unlike the static websites generated by OurBigBook CLI:
  • static websites are simpler and cheaper to run, but they are harder to setup for non-programmers
  • static websites cannot have multiuser features such as likes, comments, and "view versions of this article by other users", which are core functionality of the OurBigBook Project
The OurBigBook Web source code is fully contained under the web/ directory of the OurBigBook Project source code. OurBigBook Web can be seen as a separate Node.js package which uses the OurBigBook Library as a dependency.
OurBigBook Web was originally forked from the following starter boilerplate: github.com/cirosantilli/node-express-sequelize-realworld-example-app. We are trying to keep tech synced as much as possible between both projects, since the boilerplate is useful as a tech demo to quickly try out new technologies in a more minimal setup, but it has started to lag a bit behind. The web stack of OurBigBook Web is described at: OurBigBook Web tech stack.
It is highly recommended that you use the exact same Node.js and NPM versions as given under the package.json engines entry. The best way to do that is likely to use NVM as explained at: stackoverflow.com/questions/16898001/how-to-install-a-specific-version-of-node-on-ubuntu/47376491#47376491 Using NVM also removes the need for sudo from global install commands such as npm run link.
First time setup:
cd ourbigbook &&
npm run link &&
npm run build-assets &&
cd web/ &&
npm install &&
./bin/generate-demo-data.js --users 2 --articles-per-user 10
# Or short version:
#./bin/generate-demo-data.js -u 2 -a 10
where:
  • npm run build-assets needs to be re-run if any assets (e.g. CSS or JS files mentioned at overview of files in this repository) on the ./ourbigbook/ toplevel are modified. No need to re-run it for changes under web/.
    To develop files from outside web/, also consider:
    npm run webpack-dev
    as mentioned at: _obb directory.
  • web/bin/generate-demo-data.js also creates the database and is not optional. If you want to start with an empty database instead of the demo one, you can run instead web/bin/sync-db.js:
    ./bin/sync-db
We also provide a shortcut for that setup as:
npm run web-setup
./bin/generate-demo-data.js --users 2 --articles-per-user 10
After this initial setup, run the development server:
npm run dev
And the website is now running at localhost:3000. If you created the demo data, you can login with:
  • email: user0@mail.com, user1@mail.com, etc.
  • password: asdf
    Custom demo user passwords can be set by exporting the OURBIGBOOK_DEMO_USER_PASSWORD variable, e.g.:
    OURBIGBOOK_DEMO_USER_PASSWORD=qwer ./bin/generate-demo-data.js -u 2 -a 10
    this is useful for production.
To run on a different port use:
PORT=3001 npm run dev
We also offer shortcuts on toplevel for the npm install and npm run dev commands so you can skip the cd web for those:
npm install
npm run dev
Whenever you save changes to the backend server code, we detect this and automatically restart the server, so after a few seconds or less, you can refresh the web page to obtain the backend update.
For frontend, changes are automatically recompiled by the webpack development server, so you can basically just refresh pages and they will be updated straightaway.

7.2.2. Generated data

words: 813 articles: 3
This bot imports the Wikipedia article category tree into OurBigBook. Only titles are currently imported, not the actual article content.
This is just an exploratory step to future exports or generative AI.
We don't have an amazing automation setup as we should, but the steps are:
Now let's look at the shape of the data. Total pages:
sqlite3 enwiki.sqlite 'select count(*) from page'
gives ~59M.
Total articles:
sqlite3 enwiki.sqlite 'select count(*) from page where page_namespace = 0'
gives ~17M.
Total non-redirect articles:
sqlite3 enwiki.sqlite 'select count(*) from page where page_namespace = 0 and page_is_redirect = 0'
gives: ~6.7M
Categories:
sqlite3 enwiki.sqlite 'select count(*) from page where page_namespace = 14'
gives: ~2.3M.
Allowing for depth 6 of all of STEM:
./sqlite_preorder.py -D3 -d6 -Obigb -m -N enwiki.sqlite Mathematics Physics Chemistry Biology Technology
leads to ~980k articles.
Depth 6 on Mathematics only:
./sqlite_preorder.py -D3 -d6 -Obigb -m -N enwiki.sqlite Mathematics
leads to 150k articles. Some useless hogs and how they were reached:
  • Actuarial science via Applied Mathematics: 4k
  • Molecular biology via Applied geometry: 4k
  • Ship identification numbers via Numbers: 5k
  • Galaxies via Dynamical systems: 7k
  • Video game gameplay via Game design via Game theory: 17k
Depth 6 on Mathematics + Physics:
./sqlite_preorder.py -D3 -d5 -Obigb -m -N enwiki.sqlite Mathematics Physics
leads to 104k articles.
Allowing for unlimited depth on Mathematics:
./sqlite_preorder.py -D3 -Obigb -m -N enwiki.sqlite Mathematics
seems to reach all ~9M articles + categories, or most of them. We gave up around 8.6M, when things got really really slow, possibly due to heavy duplicate removal. We didn't log it properly, but depths of 3k+ were seen... so not setting depth is just pointless unless you want the entire Wiki.
7.2.2.2. Demo data
words: 343 articles: 1
You can generate demo data for OurBigBook Web with web/bin/generate-demo-data.js, e.g.:
cd web
./bin/generate-demo-data --users 2 --articles-per-user 10
Every time this is run, it tries to update existing entities such as users and articles first, and only creates them if they don't exist. This allows us to update all demo data on a live website that also has users without deleting any user data.
Note however that if you ever increase the number of demo users, you might overwrite real user data. E.g. if you first do:
./bin/generate-demo-data --users 2 --articles-per-user 10
and then some time later:
./bin/generate-demo-data --users 4 --articles-per-user 10
it is possible that some real user will have taken up the username that we use for the third user, which did not exist previously, and we would then overwrite their articles. So never ever do that! Just stick to the default values in production.
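The hazard can be sketched as follows (hypothetical usernames and upsert logic, not the actual script): demo generation writes by username, so if a real user has meanwhile registered a higher-numbered demo-style username, raising --users clobbers their data.

```javascript
// Hypothetical sketch of upsert-by-username demo generation: writing by
// key overwrites whatever is already stored under that key, including a
// real user who happened to register a demo-style username.
function upsertDemoUsers(db, count) {
  for (let i = 0; i < count; i++) {
    const username = `user${i}`;
    db.set(username, { username, demo: true }); // overwrites if present
  }
  return db;
}
```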
As a safeguard, to be able to run this in production you have to also pass the --force-production flag:
./bin/generate-demo-data --users 2 --articles-per-user 10 --force-production
To first fully clear the database, including any real user data, before doing anything else, use --clear, e.g.:
./bin/generate-demo-data --users 4 --articles-per-user 10 --clear
To clear the database and start with an empty database use --empty:
./bin/generate-demo-data --empty
To regenerate the PostgreSQL database instead of SQLite as mentioned at local development run with PostgreSQL:
OURBIGBOOK_POSTGRES=1 ./bin/generate-demo-data.js
By default, when you run web/bin/generate-demo-data.js, besides inserting the data into the database directly, the command also generates an in-filesystem tree that contains equivalent content under:
web/tmp/<username>/<id>.bigb
Sample paths to some files could be:
web/tmp/demo/barack-obama/ourbigbook.json
web/tmp/demo/barack-obama/test-child-1.bigb
web/tmp/demo/barack-obama/test-scope/test-scope-1.bigb
Because each user has its own ourbigbook.json file added to the directory, you can for example build each user directory in isolation with:
cd web/tmp/demo/barack-obama
ourbigbook .
This setup can be useful for quickly testing things locally, and in particular to test -W, --web upload to a local test server.
These files have nothing to do with OurBigBook Web specifically, and would be used from OurBigBook CLI itself. It would be nice to bring them up to OurBigBook CLI at some point, and only expose the Web-specific database functions from Web.

7.2.3. Log database queries

words: 333 articles: 6
There are a few methods available.
One option is to use the standard Express.js logging mechanism:
DEBUG='sequelize:sql:*' npm run dev
Shortcut:
npm run devs
These logs also include some kind of timing information. However, we are not entirely sure what the timings mean, as they show for both the Executing (query is about to start) and Executed (query finished) lines, with possibly different values, e.g.:
sequelize:sql:pg Executing (default): SELECT 1+1 AS result +0ms
sequelize:sql:pg Executed (default): SELECT 1+1 AS result +1ms
The meaning of +0ms and +1ms appears to be the time elapsed since the last message with the same ID, i.e. sequelize:sql:pg in this case. Therefore, so long as no other sequelize:sql:pg message appeared between an Executed line and its corresponding Executing line, the Executed timing should give us the query time.
This is a bit messy however, as we often want to find the largest numbers for profiling, and there could be a large time delta during inactivity.
This tends to be better for benchmarking than DEBUG sql:
OURBIGBOOK_LOG_DB=1 npm run dev
which produces output of type:
Executed (default): SELECT 1+1 AS result Elapsed time: 0ms
so we get explicit elapsed time measurements rather than deltas, and without the corresponding Executing marker.
This method uses Sequelize's benchmark: true option as per: stackoverflow.com/questions/52260934/how-to-measure-query-execution-time-in-seqilize.
It might be wise to enable PostgreSQL query logging with log_statement by default for development. TODO: does it noticeably affect performance?
See: stackoverflow.com/questions/722221/how-to-log-postgresql-queries
One major advantage of this method is that Sequelize's error logging is a bit crap, and sometimes the error appears much much more clearly in the PostgreSQL logs.
However, you often want to log only a few selected queries, otherwise it becomes very difficult to determine which query is which, in particular due to asynchronous execution. In this case, use the technique mentioned at: stackoverflow.com/questions/21427501/how-can-i-see-the-sql-generated-by-sequelize-js/21431627#21431627 and just add:
logging: console.log,
to the code in the query you want to log.
Maybe we should do a better integration: stackoverflow.com/questions/70948142/how-to-indent-logged-queries-in-sequelize this is something that we do a lot:
npm install -g sql-formatter
xsel -b | sql-formatter -l postgresql
First run the first time setup from local development server.
Then, when running for the first time, or whenever frontend changes are made, you need to create optimized frontend assets with:
npm run build-dev
before you finally start the server each time with:
npm start
This setup runs the Next.js server in production mode locally. Running this setup locally might help debug some front-end deployment issues.
Building like this notably runs full typescript type checking, which is a good way to find bugs early.
But otherwise you will just normally use the local run as identical to deployment as possible setup instead for development, as that makes iterations quicker since you don't have to re-run the slow npm run build-dev command after every frontend change.
build-dev is needed instead of build because it uses NODE_ENV_OVERRIDE which is needed because Next.js forces NODE_ENV=production and wontfixed changing it: github.com/vercel/next.js/issues/4022#issuecomment-374010365, and that would lead to the PostgreSQL database being used, instead of the SQLite one we want.
build runs npm run build-assets on toplevel which repacks ourbigbook itself and is a bit slow. To speed things up during the development loop, you can also use:
npm run build-dev-nodeps
instead, which builds only the stuff under web/.
TypeScript type checking can also be run in isolation as mentioned at Section 7.2.9. "OurBigBook Web TypeScript type checking" with:
npm run tsc

7.2.5. OurBigBook Web PostgreSQL

words: 478 articles: 9
PostgreSQL is the database that we use on production, and sometimes it is necessary to test stuff with it locally.
There are two main types of run with PostgreSQL:
To interactively inspect the local development database use our helper at web/bin/psql:
web/bin/psql
Commands can be run as usual:
web/bin/psql -c 'SELECT * FROM "Article";'
It uses PGPASSWORD as mentioned at: stackoverflow.com/questions/6405127/how-do-i-specify-a-password-to-psql-non-interactively
Before running OurBigBook Web, the PostgreSQL database should be setup with web/bin/pg-setup:
web/bin/pg-setup
This command:
  • drops the existing database if any, i.e. nukes all data
  • creates a test user
  • re-creates the test database
Here we use PostgreSQL instead of SQLite with the prebuilt static frontend.
For when you really need to debug some deployment stuff locally.
Before the first run, do the OurBigBook Web PostgreSQL setup.
Then, after every modification:
npm run build-prod
npm run start-prod
and then visit the running website at: localhost:3000/
To optionally nuke the database and create the demo data:
npm run seed-prod
or alternatively to start from a clean database:
psql -c "DROP DATABASE ourbigbook"
createdb ourbigbook
psql -c 'GRANT ALL PRIVILEGES ON DATABASE ourbigbook TO ourbigbook_user'
You can inspect the database interactively with:
psql ourbigbook
and then running SQL commands.
If you have determined that a bug is PostgreSQL specific, and it is easier to debug it interactively, first create the database as mentioned at local run as identical to deployment as possible and then:
OURBIGBOOK_POSTGRES=1 ./bin/generate-demo-data.js
OURBIGBOOK_POSTGRES=1 npm run dev
or shortcut for the run:
npm run dev-pg
Note that doing sync-db also requires NODE_ENV=production as in:
NODE_ENV=production OURBIGBOOK_POSTGRES=1 bin/sync-db.js
because we have to shell out to the ugly migration CLI, and that only understands NODE_ENV.
Setup the database:
web/bin/pg-setup ourbigbook2
OURBIGBOOK_DB_NAME=ourbigbook2 web/bin/pg web/bin/generate-demo-data.js
Run the server:
OURBIGBOOK_DB_NAME=ourbigbook2 npm run dev-pg
Or commonly to run on a different port so that two instances may be accessed separately:
PORT=3001 OURBIGBOOK_DB_NAME=ourbigbook2 npm run dev-pg
To restore a dump to the secondary database:
web/bin/pg_restore -d ourbigbook2 latest.dump
Kill all queries currently running on the PostgreSQL database.
Useful in the sad cases that our recursive queries go infinite due to bugs.
web/bin/pg-kill-queries
#!/usr/bin/env bash
script_dir="$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
# https://www.sqlprostudio.com/blog/8-killing-cancelling-a-long-running-postgres-query
"$script_dir/psql" -c "SELECT pg_cancel_backend(pid) FROM pg_stat_activity WHERE state = 'active' and pid <> pg_backend_pid();" "$@"
Save the PostgreSQL database state as per stackoverflow.com/questions/37984733/postgresql-database-export-to-sql-file with our web/bin/pg_dump helper:
web/bin/pg_dump tmp.dump
Then to restore it later with web/bin/pg_restore:
web/bin/pg_restore tmp.dump
stackoverflow.com/questions/2732474/restore-a-postgres-backup-file-using-the-command-line
Helper that gives a psql PostgreSQL shell on the default database (ourbigbook).
To select another database use the -d option, e.g. to use the ourbigbook_test database from running the OurBigBook Web unit tests in PostgreSQL:
bin/psql -d ourbigbook_test
psql just forwards everything to the underlying psql command, so you can e.g. run a SQL script stored in a file with:
bin/psql <tmp.sql
or run an SQL query from CLI with:
bin/psql -c 'select * from "Id"'
web/bin/psql
#!/usr/bin/env bash
db=ourbigbook
args=()
while [ $# -gt 0 ]; do
  case "$1" in
    -d)
      db="$2"
      shift 2
      ;;
    *)
      args+=("$1")
      shift
      ;;
  esac
done
PGPASSWORD=a psql -U ourbigbook_user -h localhost "$db" "${args[@]}"
List all queries currently running on the PostgreSQL database.
Useful in the sad cases that our recursive queries go infinite due to bugs.
web/bin/pg-ls-queries
#!/usr/bin/env bash
script_dir="$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
# https://stackoverflow.com/questions/12641676/how-to-get-a-status-of-a-running-query-in-postgresql-database/44211767#44211767
"$script_dir/psql" -c "SELECT datname, pid, state, query, age(clock_timestamp(), query_start) AS age 
FROM pg_stat_activity
WHERE state <> 'idle' AND state <> 'idle in transaction'
  AND query NOT LIKE '% FROM pg_stat_activity %'
ORDER BY age" "$@"
When developing the backend only, Next.js adds several seconds to the debug loop. This is a life saver in that case:
npm run dev-pg-back
Put a debugger statement where you want to break and run:
npm run devi
where i stands for inspect as in node inspect.
This pauses at the start of execution. So just run c and normal execution resumes until the debugger; statement is reached.
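As a minimal, hypothetical illustration (getArticle is a made-up function, not part of the actual codebase): under node inspect, execution pauses at the debugger statement; in normal runs it is a no-op.

```javascript
// Hypothetical handler to show debugger statement placement. Under
// `npm run devi` (node inspect), after typing `c` at the initial break,
// execution runs until it hits the `debugger` statement below.
function getArticle(slug) {
  debugger // breakpoint under the inspector; does nothing in normal runs
  return { slug }
}

console.log(getArticle('mathematics'))
```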

7.2.8. OurBigBook Web unit tests

words: 594 articles: 2
All our tests are located inside test.js.
They can be run with:
cd web
npm test
The dynamic website tests also use Mocha, just like the tests for OurBigBook CLI and OurBigBook Library, so similar usage patterns apply, e.g. to run just a single test:
npm test -- -g 'substring of test title'
or to show database queries being done in the tests:
DEBUG='*:sql:*' npm test
The tests include two broad classes of tests:
To run the tests on PostgreSQL instead of the default SQLite, first set up the test database analogously to local run as identical to deployment as possible:
cd web
bin/pg-setup ourbigbook_test
and then run with:
npm run test-pg
Run only matching tests on PostgreSQL:
npm run test-pg -- -g 'substring of test title'
Running tests erases all data present in the database used. In order to point to a custom database use:
DATABASE_URL_TEST=postgres://realworld_next_user:a@localhost:5432/realworld_next_test npm run test-pg
We don't use DATABASE_URL when running tests as a safeguard to reduce the likelihood of accidentally nuking the production database.
The test database contains the state of the latest test run at the end of the run. You can inspect it with web/bin/psql with:
bin/psql -d ourbigbook_test
By default, we don't make any requests to Next.js, because starting up Next.js is extremely slow for regular test usage and would drive us crazy.
In regular OurBigBook Web usage through a browser, Next.js handles all GET requests for us, and the API only handles the other modifying methods like POST.
However, we are trying to keep the API working equally well for GET, and as factored out with Next.js as possible, so just testing the API GET already gives reasonable coverage.
But testing Next.js requests before deployment is a must, and is already done by default by npm run deploy-prod from Heroku deployment, and can be done manually with:
npm run test-next
or e.g. to run just a single test:
npm run test-next -- -g 'api: create an article and see it on global feed'
or for PostgreSQL:
npm run test-pg-next
These tests are currently very basic, and only check page status. In the future, we can
  • add some HTML parsing to check for page contents as a response to GET, just as we already do in the test system of the OurBigBook Library
  • go all in and use a JavaScript-enabled test system like Selenium to also test login and data modification from the browser
If you are not making any changes to the website itself, e.g. only to the test system, then you can skip the slow rebuild with:
npm run test-next-nobuild
npm run test-pg-next-nobuild
Note that annoyingly, Next.js reuses the same folder for dev and build runs, so you have to quit your dev server for this to work; otherwise the dev server just keeps writing into the folder and messing up the production build test.
Note that Next.js tests are just present inside other tests, e.g. api: create an article and see it on global feed also tests some stuff when not testing Next.js. Running npm run test-next simply enables the Next.js tests on top of the non Next.js ones that get run by default.
These tests can only be run in production mode, and so our scripts automatically rebuild every time before running the tests, which makes things quite slow. This is required because in development mode Next.js is extremely lenient, and e.g. does not raise 500, instead returning a 200 page with error messages. Bad default.
TypeScript type checking of OurBigBook Web is run automatically during build, e.g. by:
npm run build-dev
as mentioned at local optimized frontend.
To speed up the development loop further, you can run just the TypeScript type checking with:
cd web
npm run typecheck
The output format is also a bit nicer than what is shown in npm run build-dev.
If this OurBigBook environment variable is set to true, it enables somewhat verbose logs of several key performance points, notably in conversion.

7.2.11. OurBigBook Web deployment

words: 2k articles: 21
Each user has an admin property which when set to true allows the user to basically view and change anything for themselves and other users. E.g. admins can see private data of any user such as emails, or modify users' usernames.
Some actions are not possible currently because they were originally hardcoded for "do action for the current user" rather than "do action for target user", but all of those are intended to be converted. E.g. that is currently the case for like/unlike, follow/unfollow from the API.
In order to mark a user as admin, direct DB access is required.
For example, to make user barack-obama an admin in a development setup, run the web/bin/make-admin script:
web/bin/make-admin barack-obama
Admin privileges can be revoked with the -f (--false) flag:
web/bin/make-admin -f barack-obama
The same command works in a Heroku deployment where you can run:
heroku run -a ourbigbook web/bin/make-admin -f barack-obama
7.2.11.2. OurBigBook Web database
words: 691 articles: 8
We currently have some intentional denormalization in our database e.g.:
  • counts such as: user reputation, article issue and follower counts, issue comment and follower counts
  • nested sets
These denormalizations are not ideal, but they make things a bit easier, and some of them are almost certainly faster.
To keep things slightly saner, the web/bin/normalize script can be used to view, check and update denormalized data.
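Conceptually, such a check recomputes a count from the source rows and compares it to the stored, denormalized value. A minimal sketch, with made-up data shapes (the real logic lives in web/bin/normalize and the models):

```javascript
// Sketch of a denormalization check: recompute an article's issue count
// from the issue rows and compare it against the cached column.
// Data shapes are illustrative, not the real database schema.
function checkIssueCount(article, issues) {
  const actual = issues.filter(i => i.articleId === article.id).length
  return { ok: actual === article.issueCount, stored: article.issueCount, actual }
}

const article = { id: 1, issueCount: 2 }
const issues = [{ articleId: 1 }, { articleId: 1 }, { articleId: 2 }]
console.log(checkIssueCount(article, issues))
// ok: true, stored: 2, actual: 2
```

A "fix" pass would then write actual back to the stored column whenever ok is false.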
The "nested set index" is an index explicitly maintained by our codebase that allows quickly fetching pages for OurBigBook Web dynamic article tree in pre-order depth first, i.e. the conventional order in which the table of contents and articles appear in a book. See also: stackoverflow.com/questions/4048151/what-are-the-options-for-storing-hierarchical-data-in-a-relational-database
This technique is also called "closure table" by some authors.
This index is, as the name indicates, an index, i.e. it duplicates information otherwise present in the OurBigBook Web Ref database table, which contains an adjacency list format instead, in the hope that it would be faster to traverse in pre-order depth-first.
This feature adds considerable complexity to the codebase. Also, updates can be considerably slow, as updating this index for a single article requires updating the index value for most or all other articles as well. We should benchmark it better vs recursive queries.
This index was partly introduced as a helper rather than as a pure speedup, as it is a bit hard to do pre-order tree traversal in SQLite due to the lack of arrays. In PostgreSQL we can do it well: stackoverflow.com/questions/65247873/preorder-tree-traversal-using-recursive-ctes-in-sql/77276675#77276675
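The ordering cached by such an index can be sketched as follows: compute a pre-order depth-first numbering from an adjacency list, which is conceptually what lets pages be fetched in book order without a recursive query. This is an illustration only; the data shapes are made up, not the real Ref table schema.

```javascript
// Compute a pre-order depth-first numbering of a tree given as an
// adjacency list (parent -> ordered children). The resulting numbers
// play the role of the "nested set index": sorting by them yields the
// order in which sections appear in a book.
function preorderIndex(children, root) {
  const index = {}
  let next = 0
  function visit(node) {
    index[node] = next++
    for (const child of children[node] || []) visit(child)
  }
  visit(root)
  return index
}

const children = { animal: ['dog', 'cat'], dog: ['poodle'] }
console.log(preorderIndex(children, 'animal'))
// animal: 0, dog: 1, poodle: 2, cat: 3
```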
Any pending migrations are done automatically during deployment as part of npm run build, more precisely they are run from web/bin/sync-db.js.
We also have a custom setup where, if the database is not initialized, we first:
  • just create the database from the latest model descriptions
  • manually fill in the SequelizeMeta migration tracking table with all available migrations to tell Sequelize that all migrations have been done up to this point
This is something that should be merged into Sequelize itself, or at least asked on Stack Overflow, but lazy now.
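The decision logic described above can be sketched as pure logic (illustrative names only, not the real web/bin/sync-db.js code): on a fresh database nothing is run, and every known migration is marked as done in SequelizeMeta; on an existing database only the pending ones run.

```javascript
// Sketch of the migration bootstrap decision: on a fresh DB the schema
// comes straight from the models, so all migrations are marked done
// without running; otherwise only un-applied migrations are run.
function migrationsToRun(allMigrations, appliedMigrations, dbInitialized) {
  if (!dbInitialized) {
    return { run: [], markDone: allMigrations }
  }
  const applied = new Set(appliedMigrations)
  return { run: allMigrations.filter(m => !applied.has(m)), markDone: [] }
}

console.log(migrationsToRun(['m1', 'm2', 'm3'], ['m1'], true))
// run: ['m2', 'm3'], markDone: []
console.log(migrationsToRun(['m1', 'm2'], [], false))
// run: [], markDone: ['m1', 'm2']
```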
In order to test migrations locally interactively, you can:
  • commit them on Git
  • git checkout HEAD~
  • reset the database with demo data:
    cd web
    ./bin/generate-demo-data.js --clear
  • Move back to master: git checkout -
  • Run the migration:
    ./bin/sync-db.js
Since Sequelize migrations are so hard to get right, it is fundamental to test them.
One way to do it is with our web/bin/test-migration script:
cd web
bin/test-migration -u1 -a3
For PostgreSQL:
bin/pg bin/test-migration -u1 -a3
Note that Sequelize SQLite migrations are basically worthless and often incorrectly fail due to foreign key constraints: stackoverflow.com/questions/62667269/sequelize-js-how-do-we-change-column-type-in-migration/70486686#70486686 so you might not care much about making them pass and focus only on PostgreSQL.
The test-migration script:
  • does a git checkout to the previous commit
  • regenerates the database
  • checks out to master
  • and then does the migration
The arguments of test-migration are forwarded to web/bin/generate-demo-data.js from demo data; e.g. -u1 -a5 would produce a small amount of data, suitable for quick iteration tests.
Towards the end of that script, we can see lines of type:
+ diff -u tmp.old.sqlite3.sort.sql tmp.new-clean.sqlite3.sort.sql
+ diff -u tmp.new-clean.sqlite3.sort.sql tmp.new-migration.sqlite3.sort.sql
Those are important diffs you might want to look at every time:
  • tmp.old.sqlite3.sort.sql: old schema before migration, but with lines sorted alphabetically
  • tmp.new-clean.sqlite3.sort.sql: new schema achieved by dropping the database and re-creating it at once
  • tmp.new-migration.sqlite3.sort.sql: new schema achieved migrating from the old state
Therefore, you really want the diff between tmp.new-clean.sqlite3.sort.sql and tmp.new-migration.sqlite3.sort.sql to be empty. For sqlite3 we actually check that and give an error if they differ, but for PostgreSQL it is a bit harder due to the multiline statements, so just inspect the diffs manually.
When quickly iterating before there are any users, a reasonable approach is to nuke the database every time instead of spending time writing migrations. To do this, deploy without creating a migration:
npm run deploy-prod
This breaks the website, because the DB is out of sync. So then you go and manually fix it up:
# heroku run -a ourbigbook web/bin/generate-demo-data.js --force-production --clear
Some hacks for those that have DB access.
View the update dates of all articles by a given user, e.g. as a preview before changing them with an UPDATE:
select "Article"."updatedAt" from "Article" inner join "File" on "Article"."fileId" = "File".id inner join "User" on "File"."authorId" = "User"."id" and "User".username = 'barack-obama';
OurBigBook is currently hardcoded to send emails with Sendgrid. That provider was very easy to get started with, and has a free plan suitable for testing. Setup is described at: OurBigBook Web email sending with Sendgrid. Patches supporting other providers in a configurable way are welcome.
In development mode, emails are all logged to the server stdout and not actually sent, unless you run as:
OURBIGBOOK_SEND_EMAIL=1 npm run dev
This can be used to test the email integration locally.
Some research of different methods is shown at: cirosantilli.com/send-free-emails-from-heroku
Go to www.google.com/recaptcha/about/, set up a new domain, and save the values given e.g. to Heroku for Heroku deployment:
heroku config:set -a ourbigbook RECAPTCHA_SECRET_KEY=secret_key
heroku config:set -a ourbigbook NEXT_PUBLIC_RECAPTCHA_SITE_KEY=site_key
Additionally, also set up a separate localhost reCAPTCHA to test that it is working:
echo RECAPTCHA_SECRET_KEY=secret_localhost_key >> web/.env
echo NEXT_PUBLIC_RECAPTCHA_SITE_KEY=site_localhost_key >> web/.env
and then to use the .env file run with:
cd web
env $(cat .env | xargs) npm run dev
Although it is possible to use a single reCAPTCHA for both production and development, Google recommends having separate ones.
If the NEXT_PUBLIC_RECAPTCHA_SITE_KEY variable is not set, then reCAPTCHA is simply not used in the website.
7.2.11.5. Heroku deployment
words: 1k articles: 7
Got it running perfectly at ourbigbook.com as of April 2021 with the following steps.
Initial setup for a Heroku project called ourbigbook:
sudo snap install --classic heroku
heroku login
heroku git:remote -a ourbigbook
git remote rename heroku prod
# Automatically sets DATABASE_URL.
heroku addons:create -a ourbigbook heroku-postgresql:hobby-dev
# We need this to be able to require("ourbigbook")
heroku config:set -a ourbigbook SECRET="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 256)"
# Password of users generated with ./web/bin/generate-demo-data
heroku config:set -a ourbigbook OURBIGBOOK_DEMO_USER_PASSWORD="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 20)"
# You can get it later to login with the demo users from the Heroku web interface
To finish things off, you must now:
Additionally, you also need to set up the PostgreSQL test database for both OurBigBook CLI and OurBigBook Web as documented at Section 7.2.5.1. "OurBigBook Web PostgreSQL setup":
web/bin/pg-setup ourbigbook-cli
Then deploy with:
cd web
npm run deploy-prod
Get an interactive shell on the production server:
./heroku run bash
From there you could then for example update the demo data with:
cd web
bin/generate-demo-data.js --force-production
This should in theory not affect any real user data, only the demo articles and users, so it might be safe. In theory!
Alternatively, we could do this at once with:
./heroku run web/bin/generate-demo-data.js --force-production
Drop into a PostgreSQL shell on production:
./heroku psql
Of course, any writes could mean loss of user data!
Run a query directly from your terminal:
./heroku psql -c 'SELECT username,email FROM "User" ORDER BY "createdAt" DESC LIMIT 50'
If some spurious bug crashes the server, you might want to restart it with:
./heroku restart
The heroku helper allows us to omit the boring -a ourbigbook, e.g. we can just type:
./heroku logs -f
instead of:
./heroku logs -a ourbigbook -f
heroku
#!/usr/bin/env bash
# https://docs.ourbigbook.com/file/heroku
set -eu
cmd="$1"
shift
heroku "$cmd" -a ourbigbook "$@"
The 10k rows of the free plan are easy to reach; this procedure can be used to upgrade.
The domain ourbigbook.com was leased from: porkbun.com/. Unfortunately, HTTPS on Heroku with a custom domain requires using a paying tier, so we upgraded from the free tier to the cheapest paid tier, "Hobby Project", to start with: stackoverflow.com/questions/52185560/heroku-set-ssl-certificates-on-free-plan
On the Porkbun web UI, we added a DNS record of type:
ALIAS ourbigbook.com <heroku-id>.herokudns.com
where heroku-id was obtained from:
heroku domains:add ourbigbook.com
heroku domains
and we removed all other ALIAS/CNAME records from Porkbun.
Next, we set up forwarding from ciro@ourbigbook.com to Ciro Santilli's personal gmail account. This is done in part because it appears that we are required to provide a from address for OurBigBook Web email sending with Sendgrid, and that email has to be verified. Having Porkbun host it costs $2/month, and we are trying to use as much free stuff as possible until there are actual users on the website.
Note that if you try to test from your own personal account, the redirect automatically skips sending as it notices that it would redirect to the sender. To test it you have to use some secondary email account instead.
Before pushing any new changes, and especially ones that seem dangerous, it is a good idea to first deploy to a staging server.
We have a staging server running at: ourbigbook-staging.herokuapp.com/
To set it up, we just follow the exact same steps as for Heroku deployment but with a different app ID. E.g. using the ourbigbook-staging heroku project ID:
git remote add staging https://git.heroku.com/ourbigbook-staging.git
heroku addons:create -a ourbigbook-staging --confirm ourbigbook-staging heroku-postgresql:hobby-dev
heroku config:set -a ourbigbook-staging SECRET="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 256)"
npm run deploy-staging
To copy the main database in staging we can follow the instructions at: stackoverflow.com/questions/10673630/how-do-i-transfer-production-database-to-staging-on-heroku-using-pgbackups-gett Considering a production Heroku app ID of ourbigbook:
heroku maintenance:on -a ourbigbook-staging &&
heroku pg:copy ourbigbook::DATABASE_URL DATABASE_URL -a ourbigbook-staging &&
heroku maintenance:off -a ourbigbook-staging
To get a shell on the staging server you can run:
heroku run -a ourbigbook-staging bash
7.2.11.5.5. Heroku debugging
words: 171 articles: 1
To log database queries you can run:
./heroku config:set DEBUG='*:sql:*'
You can then see them with other logs at:
./heroku logs -t
Disable these verbose logs once you're done:
./heroku config:unset DEBUG
First download a dump of the database as per devcenter.heroku.com/articles/heroku-postgres-import-export with web/bin/pg_dump_heroku:
web/bin/pg_dump_heroku
This produces a file latest.dump with the database dump. If that already exists, it gets overwritten.
Restoring that database locally to reproduce bugs can be done with the helper web/bin/pg_restore_heroku_local:
web/bin/pg_restore_heroku_local
Restoring that local database dump to Heroku, e.g. when reverting bad changes, can be done with the helper web/bin/pg_restore_heroku_remote:
web/bin/pg_restore_heroku_remote
This will then ask you to type an interactive confirmation, which we have intentionally not disabled.
That helper restores the local latest.dump database file like in Section 7.2.5.5.2. "Save and restore local PostgreSQL development database". First we nuke the database completely with web/bin/pg-setup to increase accuracy:
web/bin/pg-setup
web/bin/pg_restore --no-acl --no-owner latest.dump
We also add some extra flags to reduce the amount of warnings and errors due to database differences. The command does not exit with status 0; devcenter.heroku.com/articles/heroku-postgres-import-export says some of those warnings are normal and can be ignored.
On the toplevel we have:
  • .: OurBigBook package
  • web/: OurBigBook Web package that depends on the local OurBigBook package through relative path ..
    Every require outside of web/ must be relative, except for executables such as ourbigbook or demos such as lib_hello.js, or else the deployment will break.
    This is because we don't know of a super clean way of adding the toplevel ourbigbook package to the search path as npm run link does not work well on Heroku.
    A known workaround to allow npm run build-assets is done at: web/build.sh.
Currently, Heroku deployment does the following:
  • install both dependencies and devDependencies
  • npm run build
  • remove devDependencies from the final output to save space and speed some things up
    The devDependencies should therefore only contain things which are needed for the build, typically asset compressors like Webpack, but not components that are required at runtime.
This setup creates some conflict between what we want for OurBigBook command line users, and Heroku deployment.
Notably, OurBigBook command line users will want SQLite, and Heroku never, and SQLite installation is quite slow.
Since we were unable to find any way to make things more flexible on the package.json with some kind of optional dependency, for now we are just hacking out any dependencies that we don't want Heroku to install at all from package.json and web/package.json with sed from heroku-prebuild.
Further discussion at: github.com/ourbigbook/ourbigbook/issues/156
TODO new world to me.
stackoverflow.com/questions/18215389/how-do-i-measure-request-and-response-times-at-once-using-curl is a useful one if the server is slow:
curl -o /dev/null -s -w 'Establish Connection: %{time_connect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n' http://localhost:3000
As documented at nextjs.org/docs/advanced-features/measuring-performance by uncommenting the following lines under web/pages/_app.tsx:
//export function reportWebVitals(metric) {
//  console.log(metric)
//}
we see a few metrics printed on the browser such as:
{
    "id": "1667591558646-5510113745482",
    "name": "TTFB",
    "startTime": 0,
    "value": 2474.199999999255,
    "label": "web-vital"
}
{
    "id": "1667591558646-9119282792899",
    "name": "LCP",
    "startTime": 4982.099,
    "value": 4982.099,
    "label": "web-vital"
}
For a general introduction to CSRF see: security.stackexchange.com/questions/8264/why-is-the-same-origin-policy-so-important/72569#72569
CSRF security is organized as follows:
  • unsafe methods such as POST are all authenticated by JWT. This authentication comes from headers that can only be sent via JavaScript, so it is not possible to make users click links that will take those actions
  • safe methods such as GET are authenticated by a cookie. The cookie has the same value as the JWT. It is possible for third party websites to make such authenticated requests, but it doesn't matter, as they will not alter the server state, and contents cannot be read back due to the same-origin policy.
    There is currently one exception to this: the verification page, which has side effects based on GET. But it shouldn't matter in that specific case.
The JWT token is only given to users after account verification. Having the JWT token is the definition of being logged in.
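The per-method policy above can be sketched as a small decision function (illustrative only, not the actual middleware): unsafe methods must present the JWT via a header that only JavaScript can set, while safe methods may authenticate via the cookie.

```javascript
// Sketch of the CSRF policy: which credential source authenticates a
// request, based on whether the HTTP method can alter server state.
function authSource(method) {
  const safe = ['GET', 'HEAD', 'OPTIONS']
  return safe.includes(method.toUpperCase()) ? 'cookie' : 'jwt-header'
}

console.log(authSource('GET'))  // cookie
console.log(authSource('POST')) // jwt-header
```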
This section describes rules for normally browser-visible URLs of the website. These rules do not apply to the Web API, see OurBigBook Web API standards for Web API URL standards.
It should be impossible to have upper case characters on any URL of the website. Words should be separated by hyphens - instead.
Use the usual grammatical ordering for action-object pairs, e.g.:
  • new-discussion
  • edit-discussion
instead of:
  • discussion-new
  • discussion-edit
The latter is tempting in order to group all "Discussion" actions under a prefix, but let's use the nice grammar instead.
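The lowercase-and-hyphens rule can be sketched as a tiny helper (illustrative, not real code from the repository):

```javascript
// Sketch of the URL naming rule: no upper case characters, words
// separated by hyphens.
function urlComponent(words) {
  return words.join(' ').toLowerCase().replace(/\s+/g, '-')
}

console.log(urlComponent(['New', 'Discussion'])) // new-discussion
```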
Next.js imposes one constraint: ISR only works with URL parameters like /articles/<page>, not GET parameters like /articles?page=1.
As of writing however, we don't use any ISR as it adds a lot of complication. But still, we are trying to stick to the general principle that if something might ever be ISR'ed in the future, then we would like to keep it as a path parameter rather than a GET parameter. It feels sane.
The only things that we would ever consider ISR'ing are the pre-rendered versions of articles and issues, excluding any metadata of those that changes often or depends on logged-in users.
All lists of things will never be ISR'ed, as those can change constantly. One conclusion of this is that:
  • page number
  • ordering
  • other search-like parameters
which appear only in lists of things, will always be part of the GET query, and not params.
It is a bit annoying that due to scopes being separated with /, we always have to put article names last in any URL (outside GET parameters) to avoid ambiguities. E.g. it would be arguably nicer to have:
/go/donald-trump/linear-algebra/issues
rather than the current:
/go/issues/donald-trump/linear-algebra
but this produces ambiguity: what if user donald-trump has an article titled Issues under the scope linear-algebra?
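The reason the action-first layout parses unambiguously can be sketched as follows (illustrative parser, not the real router): with the action fixed in second position, everything after the username is the (possibly scoped) article slug.

```javascript
// Sketch of parsing /go/<action>/<username>/<slug...> URLs: since the
// action comes before the username, the remaining path segments can all
// be joined into the article slug, scopes included, with no ambiguity.
function parseGoUrl(path) {
  const [, , action, username, ...slugParts] = path.split('/')
  return { action, username, slug: slugParts.join('/') }
}

console.log(parseGoUrl('/go/issues/donald-trump/linear-algebra'))
// action: issues, username: donald-trump, slug: linear-algebra
console.log(parseGoUrl('/go/issues/donald-trump/my-scope/my-article'))
// scoped slugs still parse fine: slug is my-scope/my-article
```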
In the API article slugs are always passed as a GET parameter, unlike in the case of browser visible URLs. This is because we don't care about having:
  • nice human readable URLs
  • ISR, at least for now
so the id= parameter is always used.
For now there is no API that returns single items: getting a single item is done simply using a filter that uniquely selects a single element, e.g.:
/api/articles?id=johnsmith/mathematics
Maybe this will change if someday we ever decide to have full vs minimized versions of API objects. But at that point we might as well go to GraphQL.
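The convention can be summarized by a trivial URL builder (an illustrative helper, not part of the codebase): the slug always goes into the id= GET parameter.

```javascript
// Sketch of the Web API URL convention: article slugs are always passed
// via the id= GET parameter, never as path segments.
function apiArticleUrl(slug) {
  return '/api/articles?id=' + slug
}

console.log(apiArticleUrl('johnsmith/mathematics'))
// /api/articles?id=johnsmith/mathematics
```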
Types:
  • booleans are true or false
7.2.13.6.1. Web CLI utils
words: 204 articles: 3
Check the counts of issues per article for user barack-obama only with -c, but don't fix anything:
web/bin/normalize -c -u barack-obama article-issue-count
Print the full correct normalized state with -p:
web/bin/normalize -p -u barack-obama issue-follower-count
Fix the issue follower counts if any are wrong with -f, thus potentially altering the database:
web/bin/normalize -f -u barack-obama issue-follower-count
web/bin/normalize
#!/usr/bin/env node
// https://docs.ourbigbook.com/file/web/bin/normalize

const path = require('path')

const commander = require('commander')

const models = require('../models')

// main
const program = commander.program

program.description('View, check or update (i.e. normalize redundant database data: https://docs.ourbigbook.com/ourbigbook-web-dynamic-article-tree https://docs.ourbigbook.com/_file/web/bin/normalize')
program.option('-c, --check', 'check if something is up-to-date', false);
program.option('-f, --fix', 'fix before printing', false);
program.option('-p, --print', 'print the final state after any update if any', false);
program.option(
  '-u, --username <username>',
  'which user to check or fix for. If not given do it for all users. Can be given multiple times.',
  (value, previous) => previous.concat([value]),
  [],
);
program.parse(process.argv);
const opts = program.opts()
const whats = program.args
const sequelize = models.getSequelize(path.dirname(__dirname))
;(async () => {
  await models.normalize({
    check: opts.check,
    fix: opts.fix,
    log: true,
    print: opts.print,
    sequelize,
    usernames: opts.username,
    whats,
  })
})().finally(() => { return sequelize.close() });
Rerender all articles by all users:
web/bin/rerender-articles.js
Rerender only the articles with specified slugs:
web/bin/rerender-articles.js johnsmith/mathematics maryjane/physics
Rerendering has to be done to see updates on OurBigBook changes that change the render output.
Notably, this would be mandatory in case of CSS changes that require corresponding HTML changes.
As the website grows, we will likely need to do a lazy version of this that marks pages as outdated, and then renders on the fly, plus a background thread that always updates outdated pages.
The functionality of this script should be called from a migration whenever such HTML changes are required. TODO link to an example. We had one at web/migrations/20220321000000-output-update-ancestor.js that seemed to work, but lost it. It was simple though: you just have to instantiate your own Sequelize instance after making the model change to move any data.
web/bin/rerender-articles.js
#!/usr/bin/env node

const path = require('path')

const commander = require('commander');

const models = require('../models')
const back_js = require('../back/js')

const program = commander.program
program.description('Re-render articles https://docs.ourbigbook.com/_file/web/bin/rerender-articles.js')
program.option('-a, --author <username>', 'only convert articles by this author', undefined);
program.option('-i, --ignore-errors', 'ignore errors', false);
program.parse(process.argv);
const opts = program.opts()
const slugs = program.args
const sequelize = models.getSequelize(path.dirname(__dirname));
(async () => {
await sequelize.models.Article.rerender({
  log: true,
  convertOptionsExtra: { katex_macros: back_js.preloadKatex() },
  author: opts.author,
  ignoreErrors: opts.ignoreErrors,
  slugs,
})
})().finally(() => { return sequelize.close() });
Analogous to web/bin/rerender-articles.js but for issues.
web/bin/rerender-issues.js
#!/usr/bin/env node

const path = require('path')

const commander = require('commander');

const models = require('../models')
const back_js = require('../back/js')

const program = commander.program
program.description('Re-render issues https://docs.ourbigbook.com/_file/web/bin/rerender-issues.js')
program.option('-i, --ignore-errors', 'ignore errors', false);
program.parse(process.argv);
const opts = program.opts()
const sequelize = models.getSequelize(path.dirname(__dirname));
(async () => {
await sequelize.models.Issue.rerender({
  log: true,
  convertOptionsExtra: { katex_macros: back_js.preloadKatex() },
  ignoreErrors: opts.ignoreErrors
})
})().finally(() => { return sequelize.close() });

8. Runtime feature

words: 185 articles: 1
This section describes features present in ourbigbook_runtime.js.
That file contains JavaScript functionality to be included in the final documents to enable interactive document features such as the table of contents.
You should use the packaged _obb/ourbigbook_runtime.js instead of this file directly however.
When you have a document like:
animal.bigb
= Animal

== Dog

=== Poodle
the version without -S, --split-headers will contain a valid ID within it:
animal.html#poodle
However, if at some point you decide that the section dog has become too large and want to split it as:
= Animal

\Include[dog]
and:
dog.bigb
= Dog

== Poodle
When you do this, it would break links that users might have shared to animal.html#poodle, as the ID is now located at dog.html#poodle.
To make that less bad, if -S, --split-headers is enabled, we check at runtime if the ID poodle is present in the output, and if it is not, we redirect #poodle to the split page poodle.html.
It would be even more awesome if we were able to redirect to the non-split version as well, dog.html#poodle, but that would be harder to implement, so not doing it for now.
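The runtime check described above can be sketched as follows (an illustration only, not the actual ourbigbook_runtime.js code): if the fragment's ID is missing from the current page, compute the split-page URL to redirect to.

```javascript
// Sketch of the split-header fallback: given the URL fragment and the
// set of IDs present in the current page, return the split page to
// redirect to, or null if the ID exists and no redirect is needed.
function splitRedirectTarget(fragment, idsInPage) {
  if (idsInPage.has(fragment)) return null
  return fragment + '.html'
}

const ids = new Set(['animal', 'dog'])
console.log(splitRedirectTarget('poodle', ids)) // poodle.html
console.log(splitRedirectTarget('dog', ids))    // null
```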

9. Tooling

words: 1k articles: 7
Unlike other languages, which rely on ad-hoc tooling, we will support every single tool that is required and feasible to have in this repository, in a centralized manner.
The only thing we have for now is the quick and dirty adoc-to-ciro.
The better approach would be to implement a converter in Haskell from anything to OurBigBook.
And for OurBigBook to anything, create new output formats inside OurBigBook targeting those other formats.

9.2. Editor support

words: 863 articles: 3
VS Code is intended to be the best supported OurBigBook editor.
The official OurBigBook extension is published at: marketplace.visualstudio.com/items?itemName=ourbigbook.ourbigbook-vscode by the publisher account: marketplace.visualstudio.com/publishers/ourbigbook. Its source code is located under vscode.
Currently, the extension only supports syntax highlighting, but we want to support everything you would expect from proper programming language support, notably:
  • deployment as static website or to OurBigBook Web
  • build with click to line stdout error parsing
  • jump to ID definitions
  • HTML preview, maybe we should learn from the Asciidoc extension
Historically, Vim support came first and was better developed. But that was just an ad-hoc path of least resistance, VS Code is the one we are going to actually support moving forward.
Our syntax highlighting attempts mostly to follow the official HTML style, which is perhaps the best maintained data-language. We have also had a look at the LaTeX, Markdown and Asciidoctor ones for reference.
One current fundamental limitation of VS Code is that there is no way to preview images and mathematics inline with text: stackoverflow.com/questions/52709966/vscode-is-it-possible-to-show-an-image-inside-a-text-buffer. If only it were able to do that, it would go a long way towards being as good as a WYSIWYG interface.
Video 16. Edit locally and publish demo. Source. This shows editing OurBigBook Markup and publishing it using the VS Code extension.

9.2.2. Vim

words: 425
You can install the support with Vundle with:
set nocompatible
filetype off
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
Plugin 'gmarik/vundle'
let g:snipMate = {}
let g:snipMate.snippet_version = 1
Plugin 'MarcWeber/vim-addon-mw-utils'
Plugin 'tomtom/tlib_vim'
Plugin 'garbas/vim-snipmate'
Plugin 'ourbigbook/ourbigbook', {'rtp': 'vim'}
or by directly dropping the files named below under your ~/.vim/, e.g. vim/syntax/ourbigbook.vim
The following support is provided:
  • vim/syntax/ourbigbook.vim: syntax highlighting.
    As for other programming languages, this cannot be perfect without actually parsing, which would be slow for larger files.
    But even the imperfect approximation already covers a lot of the most cases.
    Notably it turns off spelling from parts of the document like URLs and code which would otherwise contain many false positive spelling errors.
  • vim/snippets/ourbigbook.snippets: snippets for github.com/honza/vim-snippets, which you also have to install first for them to work.
    For example, with those snippets installed, you can easily create links to headers. Suppose you have:
    = My long header
To create a cross reference to it you can:
    and it will automatically expand to:
    \x[my-long-header]
    This provides a reasonable alternative for ID calculation, until a ctags-like setup gets implemented (never/browser editor with preview-only? ;-))
    Similarly for \H parent argument you can do:
    {p
    would expand to:
    {parent=my-long-header}
    Syntax highlighting can likely never be perfect without a full parser (which is slow), but even the imperfect approximate setup (as provided for most other languages) is already a huge usability improvement.
    We will attempt to err on the side of "misses some stuff but does not destroy the entire page below" whenever possible.
  • mappings:
    • <leader>f, which usually means ,f (comma then f): start searching for a header in the current file. Does a regular / search without opening any windows, so it is very lightweight. Mnemonic: "Find".
    • <leader>h (requires Fugitive to be installed): sets up the ObbGitGrep command, which searches for headers across all Git-tracked files in the current Git repository. After ,h you are left in the prompt with:
      ObbGitGrep
      so if you complete that by:
      ObbGitGrep animal kingdom
      it will match headers that start with animal kingdom case-insensitively, e.g.:
      = Animal kingdom tree
      = Animal kingdom book
      Vim regular expressions are accepted, e.g. if you don't want it to start with the search pattern:
      ObbGitGrep .*animal kingdom
      The command opens a new tab (technically a "Vim error window") containing all matches, where you can press Enter to open one of them.
      Mnemonic: "Header search".
A simple way to develop is to edit the Vundle repository directly under ~/.vim/bundle/ourbigbook.
There are two versions of this editor:
  • editor.html is a toy/demo with no backing database.
    That editor can be viewed directly locally with:
    git clone https://github.com/ourbigbook/ourbigbook
    cd ourbigbook
    npm install
    npm run build-assets
    chrome editor.html
    It also appears at docs.ourbigbook.com/editor hosted simply under GitHub pages.
  • a similar looking editor will also appear on the OurBigBook Web, but this time linked to the database.
    That more advanced editor will actually save results back to the database, and allow preview of features such as linking to headers of other pages.
Issues for the editor are being tracked under: github.com/ourbigbook/ourbigbook/labels/editor
We must achieve an editor setup with synchronized live side-by-side preview.
Likely, we will first do a non WYSIWYG editor with side by side preview with scroll sync.
Then, if the project picks up steam, we can start considering a full WYSIWYG.
It would be amazing to have a WebKit interface that works both in the browser and locally.
Possibilities we could reuse:

9.3. Error reporting

words: 362 articles: 1
A lot of effort has been put into making error reporting as good as possible in OurBigBook, to allow authors to quickly find what is wrong with their source code.
Error reporting is for example tested with assert_error tests in test.js.
Please report any error reporting bug you find, as it will be seriously tracked under the error-reporting label.
Notably, OurBigBook should never throw an exception due to a syntax error, as that prevents error messages from being output at all.
One important philosophy of the error reporting is that the very first message should be the root cause of the problem whenever possible: users should not be forced to search a hundred messages to find the root cause. In this way, the procedure:
  • solve the first error
  • reconvert
  • solve the new first error
  • reconvert
  • etc.
should always deterministically lead to a resolution of all problems.
Error messages are normally sorted by file, line and column, regardless of which conversion stage they happened in (e.g. a tokenizer error gets reported before a parser error).
There is however one important exception to that: broken cross references are always reported last.
For example, consider the following syntactically wrong document:
= a

\x[b]

``
== b
Here we have an unterminated code block at line 5.
However, this unterminated code block leads the header b not to be seen, and therefore the reference \x[b] on line 3 to fail.
Therefore, if we sorted naively by line, the broken reference would show up first:
error: tmp.bigb:3:3: cross reference to unknown id: "b"
error: tmp.bigb:5:1: unterminated literal argument
But in a big document, this case could lead to hundreds of undefined references showing up before the actual root unterminated literal problem:
error: tmp.bigb:3:3: cross reference to unknown id: "b"
error: tmp.bigb:4:3: cross reference to unknown id: "b"
error: tmp.bigb:5:3: cross reference to unknown id: "b"
...
error: tmp.bigb:1000:1: unterminated literal argument
Therefore, we force undefined references to show up last to prevent this common problem:
error: tmp.bigb:1000:1: unterminated literal argument
error: tmp.bigb:3:3: cross reference to unknown id: "b"
error: tmp.bigb:4:3: cross reference to unknown id: "b"
error: tmp.bigb:5:3: cross reference to unknown id: "b"
...
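The ordering described above can be sketched as a comparator (the error object fields used here are hypothetical, not the actual internal representation):

```javascript
// Broken cross references sort last; everything else sorts by
// file path, then line, then column.
function compareErrors(a, b) {
  if (a.isBrokenRef !== b.isBrokenRef) {
    return a.isBrokenRef ? 1 : -1
  }
  return a.path.localeCompare(b.path) ||
    (a.line - b.line) ||
    (a.column - b.column)
}
```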

10. Security

words: 206 articles: 2
OurBigBook is designed to not allow arbitrary code execution by default on any OurBigBook CLI command.
This means that it should be safe to just download any untrusted OurBigBook repository and convert it with OurBigBook CLI, even if you don't trust its author.
In order to allow code execution for pre/post processing tasks e.g. from prepublish, use the --unsafe-ace option.
Note however that you have to be careful in general, since e.g. a malicious author could create a package with their own malicious version of the ourbigbook executable, which you could unknowingly run with the standard npx ourbigbook execution.
OurBigBook HTML output is designed to be XSS safe by default, any non-XSS safe constructs must be enabled with a non-default flag or setting, see: unsafe-xss.
Of course, we are walking on eggshells here, and this is hard to assert, so the best thing to do later on would be to parse the output, e.g. with DOMParser, to ensure that it is valid and does not contain any script tags. But it is not as simple as that: stackoverflow.com/questions/37435077/execute-javascript-for-xss-without-script-tags/61588322#61588322
XSS unsafe constructs lead to errors by default. XSS unsafe constructs can be allowed from the command line with:
./ourbigbook --unsafe-xss
or from the ourbigbook.json file with an entry of form:
"unsafe-xss": true

11. Contact

words: 11 articles: 1
github.com/ourbigbook/ourbigbook/issues

12. Developing OurBigBook

words: 8k articles: 67

12.1. Run OurBigBook master

words: 275 articles: 1
Install master globally on your machine:
git clone https://github.com/ourbigbook/ourbigbook
cd ourbigbook
npm link
npm link ourbigbook
npm run build-assets
so you can now run the ourbigbook command from any directory in your computer, for example to convert the ourbigbook documentation itself:
ourbigbook .
Note that this repository uses outputOutOfTree, and so the output will be present at out/html/index.html rather than index.html.
We also have a shortcut for npm link and npm link ourbigbook:
npm run link
npm run link produces symlinks so that any changes made to the Git source tree will automatically be visible globally, see also: stackoverflow.com/questions/28440893/install-a-locally-developed-npm-package-globally The symlink structure looks like:
/home/ciro/ourbigbook/node_modules/ourbigbook -> /home/ciro/.nvm/versions/node/v14.17.0/lib/node_modules/ourbigbook -> /home/ciro/ourbigbook
As mentioned at useless knowledge, most users don't want global installations of OurBigBook. But this can be handy during development, as you can immediately see how your changes to OurBigBook source code affect your complex example of interest. For example, Ciro developed a lot of OurBigBook by hacking github.com/cirosantilli/cirosantilli.github.io directly with OurBigBook master.
Just remember that if you add a new dependency, you must redo the symlinking business:
npm install <dependency>
npm run link
Asked if there is a better way at: stackoverflow.com/questions/59389027/how-to-interactively-test-the-executable-of-an-npm-node-js-package-during-develo. The symlink business can be undone with:
npm unlink
rm node_modules/ourbigbook
Run OurBigBook master mentions how to install and then run OurBigBook master globally, which is useful to build some projects locally on master.
To instead install locally in the current directory only instead, which can be useful for bisection:
npm install
ln -s .. node_modules/ourbigbook
npm run build-assets
You can now run tests as:
npm test
or the executable interactively as:
./ourbigbook .
It also works from a subdirectory:
mkdir -p tmp
cd tmp
../ourbigbook .

12.2. Test system

words: 797 articles: 3
Run all tests:
npm test
To run all tests on PostgreSQL as in OurBigBook Web, first set up the PostgreSQL database similarly to local run as identical to deployment as possible:
createdb ourbigbook_cli
psql -c "CREATE ROLE ourbigbook_user with login password 'a'"
psql -c 'GRANT ALL PRIVILEGES ON DATABASE ourbigbook_cli TO ourbigbook_user'
psql -c 'GRANT ALL ON SCHEMA public TO ourbigbook_user'
psql -c 'GRANT USAGE ON SCHEMA public TO ourbigbook_user'
psql -c 'ALTER DATABASE ourbigbook_cli OWNER TO ourbigbook_user'
This got really annoying with PostgreSQL 15: stackoverflow.com/questions/67276391/why-am-i-getting-a-permission-denied-error-for-schema-public-on-pgadmin-4. Then run the tests with:
npm run test-pg
List all tests:
node node_modules/mocha-list-tests/mocha-list-tests.js main.js
as per: stackoverflow.com/questions/41380137/list-all-mocha-tests-without-executing-them/58573986#58573986.
Run just one test by name:
npm test -- -g 'one paragraph'
or on PostgreSQL:
npm run test-pg -- -g 'one paragraph'
As per: stackoverflow.com/questions/10832031/how-to-run-a-single-test-with-mocha. TODO: what if the test name is a substring? You will want these Bash aliases:
npmtg() ( npm test -- -g "$*" )
npmtpg() ( npm run test-pg -- -g "$*" )
which allows you to just run:
npmtg one paragraph
npmtpg one paragraph
Run all tests that don't start with cli::
npm test -- -g '^(?!cli:)'
This works because -g takes JavaScript regular expressions, so we can use negative lookahead, see also: stackoverflow.com/questions/26908288/with-mocha-how-do-i-run-all-tests-that-dont-have-slow-in-the-name
Suppose you selected a single test:
npm test -- -g 'cli: empty document'
and want to inspect the ID database status.
On SQLite it is not currently possible as tests run on a temporary in-memory database. TODO create a way.
On PostgreSQL, you can just inspect the ourbigbook_cli database with the psql command line executable, e.g.:
psql ourbigbook_cli -c 'select * from "Id"'
That database is used to run each test, and will contain the contents of the last test executed.
To step debug during a test run, add the statement:
debugger;
to where you want to break in the code, and then run:
npm run testi -- -g 'p with id before'
where the i in testi stands for inspect, as in node inspect. Also consider the alias:
npmtgi() ( npm run testi -- -g "$*" )
Note however that this does not work for tests that run the ourbigbook executable itself, since those spawn a separate process. TODO how to do it? Tried along:
const out = child_process.spawnSync('node', ['inspect', 'ourbigbook'].concat(options.args), {
  cwd: tmpdir,
  input: options.stdin,
  stdio: 'inherit',
});
but not working, related: stackoverflow.com/questions/23612087/gulp-target-to-debug-mocha-tests So for now, we are just printing the command being run as in:
cmd: cd out/test/executable-ourbigbook.json-outputOutOfTree && ourbigbook --split-headers .
so you can just re-run it manually with node inspect as in:
cd out/test/executable-ourbigbook.json-outputoutoftree && node inspect "../../../ourbigbook" --split-headers .
This works since the tmp directory is not deleted in case of failure.
There are two types of test in our test suite:
  • tests that call the ourbigbook.convert JavaScript API directly. These tests are prefixed with lib:
    These tests don't actually create files in the filesystem, and just mock the filesystem instead with a dictionary.
    Database access is not mocked however, we just use Sqlite's fantastic in-memory mode.
    Whenever possible, these tests check their results just from the abstract syntax tree returned by the API, which is cleaner than parsing the HTML. But sometimes HTML parsing is inevitable.
  • tests that call the ourbigbook executable itself:
    • their titles are prefixed with cli:
    • they tend to be a lot slower than the API tests
    • can test functionality that is done outside of the ourbigbook.convert JavaScript API, notably stuff present only in the ourbigbook executable, so they are more end to end
    • don't do any mocking, and could therefore be more representative.
      However, as of 2022, we have basically eliminated all the hard database access mocking and are using the main database methods directly.
      So all that has to be mocked is basically stuff done in the ourbigbook executable itself.
      This means that except for more specific options, the key functionality of ourbigbook, which is to convert multiple paths, can be done very well in a non executable test.
      The only major difference is that instead of passing command line arguments like in ourbigbook . to convert multiple files in a directory, you have to use convert_before and convert_before_norender and specify conversion order one by one.
      This test robustness is new as of 2022, and many tests that were previously written as executable tests would now also work as unit tests; we generally want that to be the case to make the tests run faster.
    • work by creating an actual physical filesystem under out/test/<normalized-test-title> with the OurBigBook files and other files like ourbigbook.json, and then running the executable on that directory.
      npm test first deletes the out/test directory before running the tests. After running, the generated files are kept so you can inspect them to help debug any issues.
    • all these tests check their results by parsing the HTML and searching for elements, since here we don't have access to the abstract syntax tree. It wouldn't be impossible to obtain it however, as it is likely already JSON serializable.
Source files:
  • index.js: main code. Must be able to run in the browser, so no Node.js specifics. Exposes the central convert function. You should normally use the packaged _obb/ourbigbook.js when using ourbigbook as an external dependency.
  • ourbigbook: CLI executable. Is basically just a CLI interface frontend to convert
  • test.js: contains all the Mocha tests, see also: test system
  • README.md: minimal Markdown README until GitHub / NPM support OurBigBook :-)
  • ourbigbook_runtime.js: runtime features
  • main.scss this file simply contains the customized CSS for docs.ourbigbook.com/ and does not get otherwise distributed with OurBigBook, see: CSS

12.3.1. Generated files

words: 278 articles: 2
dist/ contains fully embedded packaged versions that work on browsers as per common JavaScript package naming convention. All the following files are generated with Webpack with:
npm run webpack
which is called from npm run build-assets.
When publishing with OurBigBook CLI, dist is placed under the _obb directory.
The files in that directory are:
  • dist/ourbigbook.js: OurBigBook JavaScript API converter for browser usage. The source entry point for it is located at index.js. Contains the code of every single dependency used from node_modules used by index.js. This is notably used for the live-preview of a browser editor with preview.
  • dist/ourbigbook_runtime.js: similar to dist/ourbigbook.js, but contains the converted output of ourbigbook_runtime.js. You should include this in every OurBigBook HTML output.
  • dist/ourbigbook.css: minimized CSS needed to view OurBigBook output as intended. Embeds all OurBigBook CSS dependencies, notably the KaTeX CSS without which mathematics displays as garbage. The Sass entry point for it is: ourbigbook.scss.
  • dist/editor.css: the CSS of the editor, rendered from editor.scss.
To develop these files, you absolutely want to use:
npm run webpack-dev
This runs Webpack in development mode, which has two huge advantages:
npm run webpack-dev also enables watch mode, so it keeps running until you turn it off.
This setup also works seamlessly when developing OurBigBook Web, just let the watch process run in a separate terminal.
When publishing with OurBigBook CLI, certain files such as the dist directory are placed under the _obb directory on the final output.
Being a reserved ID, we can safely dump any autogenerated files under _obb without fear of name conflicts with other files.
OurBigBook stores some metadata and outputs it generates/needs inside the ./out/ directory that it creates inside the --outdir <outdir>.
Overview of files it contains:
  • db.sqlite3: cross file reference internals
  • publish: a git clone of the source of the main repository to ensure that untracked files won't accidentally go into the output
    • publish/out/db.sqlite3: like out/db.sqlite3 but from the clean clone of out/publish
    • publish/out/publish: the final generated output directory that gets published, e.g. as in publish to GitHub Pages

12.4. Conversion process overview

words: 1k articles: 4
A conversion follows the following steps done for each file to be converted:
  • tokenizer. Reads the input and converts it to a linear list of tokens.
  • parser. Reads the list of tokens and converts it into an abstract syntax tree. The parser can be called multiple times recursively in some cases.
  • ast post process pass 1.
    An ast post process pass takes the abstract syntax tree that comes out of a previous step, e.g. the original parser output, and modifies the tree to achieve various different functionalities.
    We may need to iterate the tree multiple times to achieve all desired effects; at the time of writing it was done twice. Each iteration is called a pass.
    You can view snapshots of the tree after each pass with the --log option:
    ourbigbook --log=ast-pp-simple input.bigb
    This first pass basically does few but very wide-reaching operations that determine what data we will have to fetch from the database during the following DB queries step.
    It might also do some operations that are required for pass 2 but that don't necessarily fetch data, not sure anymore.
    E.g. this is where the following functionality are implemented:
  • ast post process pass 2: we now do every other post process operation that was not done in pass 1, e.g.:
    • insane paragraphs, lists and tables
  • ast post process pass 3: this does some minimal tree hierarchy linking between parents and children. TODO could it be merged into 2? Feels likely
  • render, which converts our AST into an output string. This is run once for the toplevel, and once for every header of the document if -S, --split-headers is enabled. We need to do this because header renders differ from their toplevel counterparts, e.g. their first paragraph has id p-1 and not p-283. All of those renders are done from the same parsed tree however: parsing happens only once.
    This step is skipped when using the --no-render option, or during ID extraction.
    TODO it is intended that it should not be possible for there to be rendering errors once the previous steps have concluded successfully. This is currently not the case for at least one known scenario however: cross references that are not defined.
    Sub-steps include:
    • DB queries: this is the first thing we do during the rendering step.
      Every single database query must be done at this point, in one go.
      Database queries are only done while rendering, never when parsing. The database is nothing but a cache for source file state, and this separation means that we can always cache input source state into the database during parsing without relying on the database itself, and thus preventing any circular dependencies from parsing to parsing.[ref]
      Keeping all queries together is fundamental for performance reasons, especially of browser editor with preview in the OurBigBook Web: imagine doing 100 scattered server queries:
      SELECT * from Ids WHERE id = '0'
      SELECT * from Ids WHERE id = '1'
      ...
      SELECT * from Ids WHERE id = '100'
      vs grouping them together:
      SELECT * from Ids WHERE id IN ('0', '1', ..., '100')
      It also has the benefit of allowing us to remove async/await from almost every single function in the code, which considerably slows down the CPU-bound execution path.
      As an added bonus, it also allows us to clearly see the impact of database queries when using --log perf.
      We call this joining up of small queries into big ones "query bundling".
  • at the very end of the conversion, we then save the database changes calculated during parsing and post processing back to the DB so that the conversion of other files will pick them up.
    Just like for the SELECT, we do a single large INSERT/UPDATE query per database to reduce the round trips.
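The query bundling idea can be sketched as follows (the table name and placeholder style here are illustrative; the real queries go through the database layer):

```javascript
// Collect every ID needed by the render first, then build one single
// SELECT ... IN (...) query instead of one round trip per ID.
function bundleIdQuery(ids) {
  const placeholders = ids.map((_, i) => '$' + (i + 1)).join(', ')
  return {
    sql: `SELECT * FROM "Id" WHERE id IN (${placeholders})`,
    params: ids,
  }
}
```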
Conversion of a directory with multiple input files works as follows:
  • do one ID extraction pass without render
  • do a global database check/fixup for all files that have been parsed which checks in one go for:
    • check that all cross reference targets exist.
      When using the \x magic argument:
      • only one of the plural/singular needs to exist
      • we then decide which one to use and delete the other one. Both are initially placed in the database during the ID extraction phase.
    • duplicate IDs
    • references from one non-header title to another non-header title as mentioned at \x within title restrictions
    Ideally, failure of any of the above checks should lead to the database not being updated with new values, but that is not the case as of writing.
  • do one conversion pass with render. To speed up conversion, we might at some point start storing a parsed JSON after the first conversion pass, and then just deserialize it and convert the deserialized output directly without re-parsing.
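The need for two passes can be illustrated with a toy model (the file shape and error message here are illustrative): a reference to an ID defined later, or in another file, can only be checked after every file has gone through ID extraction.

```javascript
// Pass 1 collects every defined ID across all files; only then can the
// global check decide which cross references are actually broken.
function convertAll(files) {
  const ids = new Set()
  for (const file of files) {
    for (const header of file.headers) ids.add(header) // ID extraction
  }
  const errors = []
  for (const file of files) {
    for (const ref of file.refs) {
      if (!ids.has(ref)) errors.push(`cross reference to unknown id: "${ref}"`)
    }
  }
  return errors // the render pass only runs after this check
}
```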
The two-pass approach is required to resolve cross references.
ID extraction is one of the two main passes done during conversion, in which the files are parsed and all references are stored in the database.
The implementation of much of the functionality of OurBigBook involves manipulating the abstract syntax tree.
The structure of the AST is as follows:
  • AstNode: contains a map from argument names to the values of each argument, which are of type AstArgument
  • AstArgument: contains a list of AstNode. These are generally just joined up on the output, one after the other.
    One important exception to this are plaintext nodes. These nodes contain just raw strings instead of a list of arguments. They are usually the leaf nodes.
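These shapes can be modeled roughly as follows (the real classes in index.js may have more fields; this is only a toy model):

```javascript
// An AstArgument holds a list of AstNodes; an AstNode maps argument names to
// AstArguments; plaintext leaf nodes hold a raw string instead.
class AstArgument {
  constructor(nodes) { this.nodes = nodes }
}
class AstNode {
  constructor(macroName, args) {
    this.macroName = macroName
    this.args = args // e.g. { title: AstArgument, content: AstArgument }
  }
}
function plaintext(text) {
  const node = new AstNode('plaintext', {})
  node.text = text
  return node
}
```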
We can easily observe the AST of an input document by using the following --log options:
ourbigbook --log=ast-simple input.bigb
ourbigbook --log=ast input.bigb
For example, the document:
= My title
{c}

A link to \x[another-title]{c}{p} and more text.

== Another title
produces with --log=ast-simple the following output:
ast Toplevel
  arg content
    ast H id="tmp"
      arg c
      arg level
        ast plaintext "1"
      arg numbered
        ast plaintext "0"
      arg scope
        ast plaintext "0"
      arg splitDefault
        ast plaintext "0"
      arg synonym
        ast plaintext "0"
      arg title
        ast plaintext "My title"
    ast P id="p-1"
      arg content
        ast plaintext "A link to "
        ast x
          arg c
          arg child
            ast plaintext "0"
          arg full
            ast plaintext "0"
          arg href
            ast plaintext "another-title"
          arg p
          arg parent
            ast plaintext "0"
          arg ref
            ast plaintext "0"
        ast plaintext " and more text."
    ast Toc id="toc"
    ast H id="another-title"
      arg c
        ast plaintext "0"
      arg level
        ast plaintext "2"
      arg numbered
        ast plaintext "0"
      arg scope
        ast plaintext "0"
      arg splitDefault
        ast plaintext "0"
      arg synonym
        ast plaintext "0"
      arg title
        ast plaintext "Another title"

12.4.3. Autogenerated tests

words: 59 articles: 1
The following scripts generate parametrized OurBigBook examples that can be used for performance or other types of interactive testing:
  • ./generate-deep-tree 2 5 > deep_tree.tmp.bigb
    ./ourbigbook deep_tree.tmp.bigb
    Originally designed to be able to interactively play with a huge table of contents to streamline JavaScript open close interaction.
./generate-paragraphs 10 > main.bigb
Output:
0

1

2

3

4

5

6

7

8

9
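Based on the sample output above, the generator boils down to something like this (a sketch, not the actual generate-paragraphs source):

```javascript
// Print n numbered one-line paragraphs separated by blank lines,
// which happens to be valid input for most markup languages.
function generateParagraphs(n) {
  const paragraphs = []
  for (let i = 0; i < n; i++) {
    paragraphs.push(String(i))
  }
  return paragraphs.join('\n\n') + '\n'
}
```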
We have stopped making any effort to generate nicely indented HTML output as it just did not feel worth it.
Instead, if you want to debug some badly formatted HTML you can just use our pre-installed js-beautify dependency, e.g. with:
npx js-beautify out/html/index.html

12.6. Performance

words: 503 articles: 2
To log some performance statistics, use: performance log.
One quick and dirty option is to use generate-paragraphs, which generates output compatible with most markup languages:
./generate-paragraphs 100000 > tmp.bigb
On Ubuntu 20.04 Lenovo ThinkPad P51 for example:
  • OurBigBook 54ba49736323264a5c66aa5d419f8232b4ecf8d0 + 1, Node.js v12.18.1
    time ./ourbigbook tmp.bigb
    outputs:
    real    0m5.104s
    user    0m6.323s
    sys     0m0.674s
  • Asciidoctor 2.0.10, Ruby 2.6.0p0:
    cp tmp.bigb tmp.adoc
    time asciidoctor tmp.adoc
    outputs:
    real    0m1.911s
    user    0m1.850s
    sys     0m0.060s
  • cmark 0.29.0:
    cp tmp.bigb tmp.md
    time cmark tmp.md > tmp.md.html
    outputs:
    real    0m0.091s
    user    0m0.070s
    sys     0m0.021s
    Holy cow, it is 200x faster than Asciidoctor!
  • markdown-it at 5789a3fe9693aa3ef6aa882b0f57e0ea61efafc0 to get an idea of a JavaScript markdown implementation:
    time markdown-it tmp.md > tmp.md.html
    outputs:
    real    0m0.361s
    user    0m0.590s
    sys     0m0.060s
  • cat just to find the absolute floor:
    time cat tmp.bigb > tmp.tmp
    outputs:
    real    0m0.006s
    user    0m0.006s
    sys     0m0.000s
On P51:

12.7. Internals API

words: 179 articles: 2
Tokenized token stream and AST can be obtained as JSON from the API.
Errors can be obtained as JSON from the API.
Everything that you need to write OurBigBook tooling is present in the main API.
All tooling will be merged into one single repo.
Every OurBigBook document is implicitly put inside a \Toplevel document and:
  • any optionally given arguments at the very beginning of the document will be treated as arguments of the \Toplevel macro
  • anything else will be put inside the content argument of the \Toplevel macro
E.g., an OurBigBook document that contains:
{title=My favorite title}

And now, some content!
is morally equivalent to:
\Toplevel{title=My favorite title}
[
And now, some content!
]
In terms of HTML, the \Toplevel element corresponds to the <html>, <head>, <header> and <footer> elements of a document.
Trying to use the \Toplevel macro explicitly in a document leads to an error.

12.8. CSS

words: 134 articles: 2
Our CSS is located at main.scss and gets processed through Sass.
To generate the CSS during development after any changes to that file, you must run:
npm run sass
which generates the final CSS file:
main.css
You then need to explicitly include that main.css file in your --template. For example, our ourbigbook.liquid.html contains a line:
<link rel="stylesheet" type="text/css" href="{{ root_relpath }}main.css">
where root_relpath is explained under Section 5.5.25. "--template".
The file ourbigbook.common.scss contains stand-alone Sass definitions that can be used by third parties.
One use case is to factor out OurBigBook style with the site-specific boilerplate.
E.g. a website that stores its custom rules under main.scss can do stuff like:
@import 'ourbigbook/ourbigbook.common.scss';
The main design goal on narrow screens is that there should never be horizontal scrolling for the whole document, only on a per-element basis.
Every foreign key should have a manually created associated index; this is not done automatically by either PostgreSQL or Sequelize:
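For example, the kind of statement such a manual index boils down to can be sketched as (the table and column names are hypothetical; in practice this would go through a migration):

```javascript
// Build the CREATE INDEX statement for a foreign key column, since neither
// PostgreSQL nor Sequelize creates such indexes automatically.
function createFkIndexSql(table, column) {
  return `CREATE INDEX "${table}_${column}_idx" ON "${table}" ("${column}")`
}
```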
TODO. Describe OurBigBook's formal grammar, and classify it in the grammar hierarchy and parsing complexity.

12.11. Release procedure

words: 227 articles: 3
Before the first time you release, make sure that you can login to NPM with:
npm login
This prompts you to login via the browser with 2FA. Currently you can also tick a box to not ask again for the next 5 minutes, which should be enough for the following release command. If you don't select this option, you will be prompted midway through the release command for login.
Releases should always be made with the official www.npmjs.com/~ourbigbook-admin NPM user.
Then, every new release can be done automatically with the release script, e.g. to release a version 0.7.2:
./release 0.7.2
or to just increment the patch version, e.g. from the current 0.7.1 to 0.7.2, you can omit the version argument:
./release
That script does the following actions, aborting immediately if any of them fails:
  • runs the tests
  • publishes this documentation
  • updates version in package.json
  • creates a release commit and a git tag for it
  • pushes the source code
  • publishes the NPM package
After publishing, a good minimal sanity check is to ensure that you can render the template as mentioned in play with the template:
cd ~
# Get rid of the global npm link development version just to make sure it is not being used.
npm uninstall -g ourbigbook
git clone https://github.com/ourbigbook/template
cd template
npm install
npx ourbigbook .
firefox out/html/index.html
The VS Code extension is published separately with vsce:
npm install -g @vscode/vsce
cd vscode
vsce package
vsce publish
The repository cirosantilli.com/ourbigbook-media contains media for the project, such as documentation and publicity assets.
It was created to keep blobs out of this repository.
Some blobs were unfortunately added to this repository earlier on, but when we saw that we would need more and more, we made the sane call and split them out.

12.13. Project governance

words: 224 articles: 4
The OurBigBook Project currently has a single top level executive, the OurBigBook Admin, who has ultimate power over the project.
There is currently no legal incorporated entity.
These will likely change if the project ever gets any traction, but for now things are being run in an informal manner only.

12.13.1. Team

words: 55 articles: 1
Ciro Santilli is the founder and Absolute Magnanimous All Powerful Eternal Ruler (AMAPER) of the OurBigBook Project.
Ciro is passionate about free education, which allows learners to progress as fast as they want, and about User Generated Content, which allows anyone to be the teacher.
Figure 34. Ciro Santilli wearing the Sacred OurBigBook Project Hoodie!

12.13.2. OurBigBook Admin

words: 122 articles: 1
Top-level executive of the OurBigBook Project, who has ultimate power over the project.
OurBigBook.com account: ourbigbook.com/ourbigbook
GitHub account: github.com/ourbigbook-admin
OurBigBook Admins can select one article from any user to be pinned to the website's "front index pages" such as the global article, topics or user indexes.
The typical use case of this feature is to facilitate user onboarding, and it could also be used for general server announcements.
To modify the pinned article, admins must visit the "Site Settings" page under: ourbigbook.com/go/site-settings. That page can be accessed via the "Site Settings" button at the bottom of each index page.
Figure 36. Edit the pinned article setting in the site settings page.
Figure 37. The selected article now shows on the homepage for all users to see.

12.14. Publicity

words: 4k articles: 28

12.14.1. Official accounts

words: 51 articles: 2
twitter.com/OurBigBook (case insensitive)

12.14.2. Merchandise

words: 591 articles: 7
12.14.2.1. Clothing
words: 339 articles: 1
We are thinking about the following layout:
  • front: "OurBigBook.com" on upper left chest text
  • back:
    • "OurBigBook.com" on top across back. This positioning is crucial as it will show above chairs in amphitheatres
    • logo below text centered
We are planning on using a clear green on white color scheme to reflect the current website CSS.
TODO ideas: understand what students wear, and then copy it with our logo. E.g. for Oxbridge, one could design college puffer jackets.
All items are available at: www.tshirtstudio.com/marketplace/ourbigbook-com from tshirtstudio.com. Each sale includes a 5 dollar/euro/pound donation to the project.
This is a reasonable website.
It is a shame that you can't easily drag and drop to move/resize images on the web UI, which has led us to do that manually in the source images.
But it is still relatively easy to use, and easy to set up a marketplace in.
Another downside is that it does not seem possible to edit existing designs, so it is a bit hard to know exactly what you had done when it is time to update things.
Figure 38. TShirt Studio black t-shirt front.
Very slightly too straight on shoulders, but not bad. The front color is a bit off/too white-ish, but not terrible.
This is a picture from version one, which did not have the project slogan. Version one is no longer available for sale, only the new one with the slogan.
Figure 39. TShirt Studio black t-shirt back. Buy at: www.tshirtstudio.com/marketplace/ourbigbook-com/ourbigbook-com-black-t-shirt.
Figure 40. TShirt Studio black hoodie front.
Good quality, but the material is slightly warmer than I'd like, I tend to prefer slightly fluffier ones.
This is a picture from version one, which did not have the project slogan. Version one is no longer available for sale, only the new one with the slogan.
It was slightly concerning whether the hoodie hood would cover the URL, but in practice it does not do so often.
Figure 41. TShirt Studio black hoodie back. Buy at: www.tshirtstudio.com/marketplace/ourbigbook-com/ourbigbook-com-black-zip-hoodie-2.
Figure 42. TShirt Studio white t-shirt front. Buy at: www.tshirtstudio.com/marketplace/ourbigbook-com/ourbigbook-com-white-t-shirt.
Figure 43. TShirt Studio white t-shirt back. Buy at: www.tshirtstudio.com/marketplace/ourbigbook-com/ourbigbook-com-white-t-shirt.
12.14.2.2. Sticker
words: 252 articles: 4
Figure 44. Sticker SVG. This SVG is our original sticker design for laptops.
Two sticker widths: 5 inch (12.7 cm) or 7.5 inch (19 cm). Also does t-shirts and hoodies. Design not showing on newly created shop page after several refreshes.
Rectangle widths available: 12.7 cm and 17.8 cm, which is reasonable. Both £5.99. Also does t-shirts and hoodies. Good design UI.
Only has a rectangle of width 11.43 cm, price £5.70. Also does t-shirts and hoodies. Design UI is a bit cluttered.

12.14.3. Project identity

words: 819 articles: 5
The logo can be seen at: Figure 1. "Logo of the OurBigBook Project".
Figure 46. logo.svg.
Canonical project logo.
This SVG file was actually manually created, and therefore counts as code and can be tracked on this git repository.
Since it does not contain text, only geometric primitives, this SVG file does not rely on any external system fonts and is fully reproducible.
Figure 47. logo-256.png. 256x256 PNG version of Figure 46. "logo.svg", ideal for profile pictures that don't support SVG. Generated with:
convert logo.svg -resize 256x logo-256.png
Figure 48. logo-transparent.svg.
This is a version of logo.svg with a transparent background instead of the hardcoded black background.
It was useful e.g. for t-shirt merch, where the t-shirt background choices were not perfectly black, and the black square would be visible (and possibly glossy) otherwise, which would not be nice.
Figure 49. logo-transparent-with-text.svg. This version of the logo was useful when designing project T-shirts on tshirtstudio.com. On that website, you can't easily resize images with drag and drop, so:
  • leaving some extra margin at the top makes the text more likely to remain visible given the hoodie hood
  • leaving some extra margin around allows us to make the image a bit less huge and imposing
Figure 50. logo-transparent-with-text-and-slogan-2000.png. This is perhaps a superior alternative to Figure 49. "logo-transparent-with-text.svg" for merchandise, as the project slogan could clarify further what the merchandise is all about.
Figure 51. logo-transparent-with-text-and-slogan-2000-2150.png.
This is the same as logo-transparent-with-text-and-slogan-2000.png but with a 150 px border added to the top to ensure that the tshirtstudio.com hoodie hood won't hide the URL.
It was created with:
convert logo-transparent-with-text-and-slogan-2000.png -gravity north -background transparent -splice 0x150 logo-transparent-with-text-and-slogan-2000-2150.png
Some rationale:
  • the lowercase b followed by uppercase B gives the idea of big and small
  • the small o looks a bit like a degree symbol, which feels sciency. It also contributes to the idea of small to big: o is smallest, b a bit larger, and B actually big
  • keep the same clear on black feeling as the default CSS output
  • yellow, green and blue are the colors of Brazil, where Ciro Santilli was born!
It might be cool if we were able to come up with something that looks more like an actual book though instead of just using a boring lettermark.
A good point of the current design is that it suggests a certain simplicity. We want the explanations of our website to be simple and accessible to all.
In addition to the pictorial logo, we have also created a few textual logos which might be useful.
We first designed them as a way to nicely take up the upper left chest square space on tshirtstudio.com T-shirts, as a long one-line version of ourbigbook.com would be too small and unreadable.
The main idea of the text logo is to make a letter square with uppercase monospace font letters:
OUR
BIG
BOOK
.COM
Could make the OBB red and other letters white. But that does come a bit closer to our dreaded ÖBB name competitor.
Note that monospace fonts are not actually square, only fixed width: graphicdesign.stackexchange.com/questions/45260/name-for-type-that-has-the-same-width-and-height
Another idea to differentiate from ÖBB would be to go lowercase:
obb
We were thinking something like:
Learn for real!
but we wonder if that wouldn't be too close to: www.learningforreal.org/. Maybe not.
Another one that is also somewhat taken is:
You be the teacher
www.teacherspayteachers.com/Product/You-Be-The-Teacher-Independent-Research-Project-Distance-Learning-3785062
A free domain name was the key restriction.
We almost went with destroyuni.com!!! But Ciro regained his senses in the end. A two word domain would be sweet though.
But Ciro was very happy with OurBigBook. Some other <possessive><adjective><noun> domains:
Figure 58. Topics page banner.
Initial project banner showing the OurBigBook Web topics feature. Not very subtle, but will do as a placeholder.
The downside of this is that much of its bottom left is hidden by the profile picture on websites such as Twitter and LinkedIn.
The banner is also a bit narrow for certain websites, and either looks rescaled or is outright not allowed without editing, e.g. YouTube requires a minimum width of 1024, with 2048 recommended.
YouTube is also extremely picky, and it is hard to make the banner look right as it reserves a mandatory huge height for TV displays! The best approach we could find is to make the image huge and fill in black with:
convert banner-topics-signed-in-800.png -background black -gravity center -extent 2000x1000 tmp.png
and then drag the image selection so that the desktop view covers the area we care about.
Websites that accept banners:
  • Twitter
  • LinkedIn
  • YouTube
  • Reddit
  • Patreon
  • Facebook
Demo videos are uploaded to the official YouTube account: www.youtube.com/@OurBigBook
The video files together with the assets used to make them are also made available in the OurBigBook media repository under the video/ directory.
Video guidelines:
  • desktop recording area size: 720x720. This could perhaps be optimized, but it is a reasonable size that works both as a YouTube Short and as a Twitter post.
    Previously we had been using 700x700, but at some point YouTube appears to have stopped generating 720p resolution for those, and 480p is just too bad.
    We've been happily using vokoscreenNG.
    A good technique is to move the recording window to the bottom left of the screen, which stops things from floating around too much.
  • use Chromium/Chrome to record
  • resize the window to fit the recording area horizontally by using the Ctrl + Shift + C debugger view. Make sure to also resize the browser window vertically (this cannot be done in the debugger, it requires resizing the actual window), otherwise you won't be able to scroll if the page is not taller than the viewport.
  • be careful about single pixel black border lines straying in the recording area, they are mega visible against the clear chrome browser bar on the finished output!
  • music style guidelines: cool, beats, techno, mysterious, upbeat
    Some of the videos contain non-fully-free YouTube music added via the YouTube UI. Reuploading them together with the video files does however appear to be allowed. Ideally we should use fully CC BY-SA music, but it is quite hard to find good tracks. NC is not acceptable.
  • hardcode subtitles in the video. No voice. Previously we were using Aegisub to create the subtitles in .ass format and ffmpeg to hardcode:
    ffmpeg -i raw.mkv -vf subtitles=sub.ass out.mkv
    but later we learnt about KDenlive support for subtitles and moved to that instead as it is even more convenient to have it all in one place. Use:
    • 22pt white font with black background to improve readability
    • aim to have at most 3-4 lines of subtitles per frame
    When recording, make sure that all key mouse action happens on the top half of the viewport, otherwise it will get covered by the subtitles in downstream editing.
  • on YouTube, add the video as the first video of the "Videos" playlist: www.youtube.com/playlist?list=PLshTOzrBHLkZlpvTuBdphKLWwU7xBV6VF This list is needed because otherwise YouTube's stupid "Shorts" features produces two separate timelines by default, one for shorts and one for non-shorts. With this list, all videos can be seen easily as non-shorts.

12.14.5. News

articles: 5
This section is present in another page, follow this link to view it.

12.14.6. OurBigBook.com Fellowship

words: 928 articles: 3
The OurBigBook Project has sporadically offered a fellowship called the "OurBigBook.com Fellowship". Its recipients are called the "OurBigBook.com Fellows".
The goal of the fellowship is to pay brilliant students to focus exclusively on pursuing ambitious goals in STEM research and education for a pre-determined length of time, without having to worry about earning money in the short term.
The fellowship is both excellence and need based, focusing on brilliant students from developing countries whose families were not financially able to support them.
Being brilliant, such students would be tempted and able to go for less ambitious jobs that pay in the short term. The goal of the fellowship is to free such students to instead pursue more ambitious, longer term goals.
Or in other words: to allow smart people to do whatever the fuck they really want to do.
The fellowship is paid as a single monetary transfer to the recipient.
There are no legally binding terms to the fellowship: we pick good people and trust them to do what they think is best.
The fellowship is more accurately simply a donation. There is no contract. Whatever happens, the OurBigBook Project will never be able to take legal action against a recipient for not "using well" their donation.
The following ethical guidelines are however highly encouraged:
  • to acknowledge the funding where appropriate, e.g.:
    • at "funding slide" (usually the last one) of a presentation for work done during, or that you feel is a direct consequence of the fellowship
    • by marking yourself as a "OurBigBook.com Fellow" on LinkedIn, under the organization: www.linkedin.com/company/ourbigbook for the period of award
  • keep in touch. Let us know about any large successes (or failures!) you have as a consequence of the funding, e.g. publications, starting a cool new job, or deciding to quit academia.
  • give back culture: if one day, in a potentially far and undefined future, recipients achieve a stable financial situation with some money to spare, they are encouraged to give back to the OurBigBook.com Fellowship fund an amount at least equal to their funding.
    This enables us to keep sustainably investing in new brilliant talent who needs the money.
    We are more than happy to consider the fellow's suggestion for a recipient of their choice.
    Remember that an investment in the American stock market doubles every 10 years. So if you do go into a money making area, can you as a "person investment", match, or even beat the market? :-) Or conversely, the sooner you give back, the less you are morally required to give back.
    Fellows who go on to work on charitable causes, which includes incredibly underpaid academic jobs, absolutely don't have to give back.
    If you are able to give back by doing a corresponding amount of good to the world all the better.
    It is you who have to look into your heart and decide: how much free or underpaid work have I done? And then, if there is some money left after this consideration, you give that amount back.
  • pivoting is OK. If you decide half way that your initial project plan is crap, change! We can only notice that something won't work once we try to do it for real. At least now you know!
    If you do pivot to something that makes money immediately however, the correct thing to do is to return any unused funds of the fellowship. The sooner you pay, the lesser your moral dividend obligation, right?
  • be bold. Don't ever think "I'll take this safer option because it will allow me to pay back earlier".
    The entire goal of the scholarship is to allow smart people to take greater risks. If you took the risk, e.g. made a startup instead of going to a safer job, failed, and that made you make less money than you would have otherwise, no problem, deduce that cost from the value you can return in the future, and move on.
    But if you take a bet and it pays big time, do remember us ;-)
We also encourage fellows to take good care of their health, and to strive for a good work/life balance. Exercise. Eat well. Rest. Don't work when you're tired. Take time off when you are stressed. Keep in touch with good friends and family. Talk to someone if you feel down. Taking good care of yourself pays great dividends in the long run. Invest in it.
12.14.6.1. OurBigBook.com Fellows
words: 191 articles: 2
This section lists current and past OurBigBook.com Fellows. It is a requirement of the fellowship that fellows should be publicly listed here.
Publicly known updates related to their fellowship projects may also be added here where appropriate, notably successes! But we also embrace failure. All must know that failure is a possibility, and does happen. If you can't fail, you're not dreaming big enough. Failing is not bad, it is inevitable.
12.14.6.1.1. 2022
words: 119 articles: 1
2022-12: Letícia Maria Paz De Lima is awarded 10,000 Brazilian Real to help her:
Focus on her quantum computing studies and research until 2023-06-30 (end of her third year), with the future intention of pursuing a PhD abroad in that area.
At the time of the award, Letícia was a 3rd year student at the Molecular Sciences Course of the University of São Paulo and held a FAPESP Scientific Initiation Scholarship. She had become interested in Quantum Computing in the previous year, and is passionate about working in that promising area of technology.
Her main mentors in the area have been professor Paulo Nussenzveig and Barbara Amaral of the Institute of Physics of the University of São Paulo.

Synonyms