OurBigBook.com: The OurBigBook Project is creating the ultimate open source tools to publish textbooks, personal knowledge bases, Zettelkasten and digital gardens following the learn-in-public philosophy. It is our best shot yet at the final real-world Encyclopedia Galactica, allowing effective mind melding/collective intelligence via the topics feature.
OurBigBook CLI quick start using our template project github.com/ourbigbook/template:

git clone https://github.com/ourbigbook/template
cd template
npm install
npx ourbigbook .
firefox out/html/index.html

The HTML output is visible at: ourbigbook.github.io/template
To publish your project as a static website to GitHub Pages, first create a GitHub repository for it, e.g.:

johnsmith/mybrain

and then:

git remote set-url origin git@github.com:ourbigbook/ourbigbook-private.git
ourbigbook --publish

and your project is now published to johnsmith.github.io/template
To publish your project to OurBigBook.com, create an account at ourbigbook.com/go/register and then:

ourbigbook --web

and now your project is published at: ourbigbook.com/johnsmith
A full-blown showcase knowledge base can be seen at:
- cirosantilli.com (static website publish)
- ourbigbook.com/cirosantilli (equivalent on OurBigBook.com with dynamic website features such as topics)
- github.com/cirosantilli/cirosantilli.github.io (source code for both of the above)
Mission: to live in a world where you can learn university-level mathematics, physics, chemistry, biology and engineering from perfect free open source books that anyone can write to get famous.
Ultimate goal: destroy the currently grossly inefficient education system and replace it with a much more inspiring system where people learn what they want as fast as possible to reach their goals faster without so much useless pain.
How to get there: create a website (live at OurBigBook.com) that incentivizes learners (notably university students taking courses) to write freely licensed university-level natural science books in their own words for free. Their motivations for doing so are:
- getting their knowledge globally recognized and thus better jobs
- improving the world
- learning by teaching
Notable features:
- topics: groups the articles of different users about the same topic, sorted by upvote to achieve mind melding/collective intelligence. This makes your articles easier for others to find.
- local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published with OurBigBook CLI either:
  - to OurBigBook.com: to get awesome multi-user features like OurBigBook Web topics
  - as HTML files to a static website: you can host yourself for free on many external providers like GitHub Pages, and remain in full control

  This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans for, as it is quite cheap to host!), your content will still be perfectly readable as a static site.
- infinitely deep table of contents: never again be limited to only 6 levels by HTML's legacy h6 limitation! With OurBigBook, the sky is the limit! Furthermore, with the dynamic article tree of OurBigBook Web, every header can be the toplevel header for better SEO and user experience, e.g. both of the following pages show all their ancestors:
Key links:
- OurBigBook.com: reference OurBigBook Web instance
- donate to the OurBigBook Project: donate
- project announcements: Section 12.15.5. "News". Also posted in shorter form to Section 12.15.1. "Official accounts" such as:
- cirosantilli.com/ourbigbook-com: further rationale behind the project by the founder Ciro Santilli
- cirosantilli.com: showcase static demo document with interesting content, published with OurBigBook CLI. Primary inspiration for OurBigBook development.
- cirosantilli.com/oxford-nanopore-river-bacteria: a self-contained tutorial style part of the above. Note how internal links integrate seamlessly into the more global topic of biology, e.g. when talking about DNA we link to the global topic cirosantilli.com/dna.
- github.com/cirosantilli/cirosantilli.github.io and github.com/cirosantilli/cirosantilli.github.io/blob/dev/oxford-nanopore-river-bacteria.bigb: source of the above showcase documents
- Section 3.2. "Design goals": OurBigBook Markup and OurBigBook CLI feature overview
- github.com/ourbigbook/ourbigbook: OurBigBook source code
- github.com/ourbigbook/ourbigbook/issues: project issue tracker
- github.com/ourbigbook/ourbigbook/blob/master/README.bigb: source for this document
- docs.ourbigbook.com: rendered version of this document
- docs.ourbigbook.com/_obb/dist/editor: live in-browser editor demo
- github.com/ourbigbook/template: good template to get started with OurBigBook CLI, see Section 5.2. "OurBigBook CLI quick start"
- cirosantilli.com/ourbigbook-media: media for the project such as for documentation and publicity, more info: Section 12.13. "OurBigBook media repository"
To donate:
- cirosantilli.com/sponsor: give money directly to Ciro Santilli
- buy project merchandise, see: Section 12.15.2. "Merchandise"
All donated money currently goes directly to the personal bank account of Ciro Santilli, the project founder and current lead. If things ever take off we will set up a legal entity to make things cleaner. One may dream. But for now, it would just add unnecessary overhead. Related: Section 12.14. "Project governance".
Ciro announces funding milestones and transparently accounts for all donations at: cirosantilli.com/sponsor. When milestones are reached, he quits his day job and works full time on the project for a given amount of time.
We are also happy to discuss paid contracts to implement specific features, to get in touch see: contact.
The following sections cover the different ways to use tools from the OurBigBook Project:
- OurBigBook Web user manual: manual for OurBigBook Web, the dynamic website. With this approach, you can write content on the browser without downloading anything, and save it on our database.
- convert local .bigb files with OurBigBook CLI. This method allows you to publish either as a static website or to OurBigBook.com.
- OurBigBook Markup quick start: covers specifically OurBigBook Markup, the markup language you use to write content in OurBigBook, both on OurBigBook Web and with OurBigBook CLI. This is currently the only way to write OurBigBook content, but we would really like to add WYSIWYG editor support one day!
- cross references to any header (including e.g. h2, h3, etc. in other files), images, etc. with amazing error checking and reporting: never break internal links without knowing again, and quickly find out what broke when you do. E.g. animal.bigb:

  = Animal

  <Bats> are <flying animals>.

  and flying-animal.bigb:

  = Flying animal

  == Bat

  would render something like:

  <a href="flying-animal.html#bat">Bats</a> are <a href="flying-animal.html">flying animals</a>.

  The following would fail and point you to the file and line of the failure:
  - nonexistent ID:

    <Weird animal not documented>

  - duplicate IDs:

    = Animal

    == Dog

    == Cat

    == Dog
- KaTeX server-side mathematics, works in browsers with JavaScript disabled:

  I like $\sqrt{2}$, but I adore this \x[equation-quadratic-equation]:

  $$
  x^2 + 2x + 1
  $$
  {title=Quadratic equation}
- multi-file features out of the box so you don't need a separate wrapper like Jekyll to make a multi-page website:
- cross file references
- single-source multi-format output based on includes and build options:
  - by default, one HTML output per source file, with includes rendered as links between pages. E.g. README.bigb:

    = My website

    == h2

    \Include[not-readme]

    and not-readme.bigb:

    = Not readme

    == Not readme h2

    produce index.html and not-readme.html.
  - with the -S, --split-headers option, you can output each header of an input file into a separate output file. The previous filesystem would produce:
    - index.html: contains the full README.bigb output
    - split.html: split version of the above, containing only the = My website header and not the h2
    - h2.html: only contains the h2 header
    - not-readme.html: contains the full output of not-readme.bigb
    - not-readme-split.html: only contains the = Not readme header
    - not-readme-h2.html: only contains the = Not readme h2 header

    Each of those pages automatically gets a table of contents.
  - with --embed-includes: single-file output from multiple input files. Includes are parsed smartly, not just source copy pasted, e.g. included headers are shifted from h1 to h2 correctly. On the previous sample filesystem, it would produce a single output file index.html which would contain a header structure like:

    = My website

    == h2

    === Not readme

    ==== Not readme h2
  - supports both local serverless rendering to HTML files for local viewing, and server-oriented rendering such as GitHub Pages, e.g. cross references automatically get the .html extension or not. E.g.:
    - locally, a link \x[not-readme] would render as <a href="not-readme.html">, and not-readme.bigb produces not-readme.html
    - when publishing, \x[not-readme] would render as <a href="not-readme">, and not-readme.bigb also produces not-readme.html, which the server converts to just http://my-website.com/not-readme
- cross-file configuration files to factor out common page parts like headers, footers and other metadata, e.g.:
  - ourbigbook.liquid.html: Liquid template used for all pages, see example at: Section 5.2.1. "Play with the template"
  - main.scss: CSS stylesheet generated from SASS input, see example at: Section 5.2.1. "Play with the template"
  - ourbigbook.tex: global LaTeX math definitions, e.g.:

    \newcommand{\abs}[1]{\left|#1\right|}

    and then you can use $\abs{x}$ in any .bigb file of the project
  - ourbigbook.json: per-repository configuration options
- table of contents that crosses input files via includes. E.g. with README.bigb:

  = My website

  == h2

  \Include[not-readme]

  and not-readme.bigb:

  = Not readme

  == Not readme h2

  the table of contents for index.html also contains the headers for not-readme.bigb, producing:

  - My website
    - h2
    - Not readme
      - Not readme h2

  This means that you can split up large input files if rendering starts to slow you down, and things will still render exactly the same.
- check that local files and images linked to actually exist, see the \a external argument. E.g.:

  \a[i-don-exist.txt]

  would lead to a build error.
- associate headers to files or directories with the \H file argument, e.g.:

  Here's an example of a nice image: \x[path/to/my/image.png]{file}.

  = path/to/my/image.png
  {file}

  This image was taken when I was on vacation!

  would automatically add a preview of the image to the output. Display files and their metadata nicely directly on your static website rather than relying exclusively on GitHub as a file browser.
- advanced header/ID related features:
  - ID-based header levels:

    = Furry animal

    I like \x[furry-animal]{p}, especially my cat, here is his photo: \x[image-my-cat].

    == Cat

    \Image[My_cat.jpg]
    {title=My cat}

  - scopes, either with directories or within a single file:

    See the important conclusion of my experiment: \x[report-of-my-experiment/conclusion]

    = Report of my experiment
    {scope}

    == Introduction

    == Middle

    == Conclusion
  - cross reference title inflection for capitalization and pluralization, e.g.:

    = Dog

    == Snoopy
    {c}

    \x[dog]{c}{p} are fun. But the \x[dog] I like the most is \x[snoopy]!

    would render:
    - \x[dog]{c}{p} as "Dogs": capitalized because of {c} and pluralized because of {p}
    - \x[dog] as "dog": auto-lowercased because its header = Dog does not have {c}
    - \x[snoopy] as "Snoopy": title capitalization kept upper case due to the {c} on the header == Snoopy
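The inflection behavior described above can be summarized in a tiny sketch. This is a naive illustration, not the actual OurBigBook implementation (which, for instance, inflects plurals properly rather than just appending "s"):

```javascript
// Naive sketch of the cross reference inflection rules described above:
// - {c} on the reference forces an upper-case first letter
// - otherwise the first letter is lowercased, unless the header itself has {c}
// - {p} on the reference pluralizes (here naively, by appending "s")
function renderRef(title, headerHasC, { c = false, p = false } = {}) {
  let s = title;
  if (c) {
    s = s[0].toUpperCase() + s.slice(1);
  } else if (!headerHasC) {
    s = s[0].toLowerCase() + s.slice(1);
  }
  if (p) {
    s += 's'; // the real tool uses proper English inflection
  }
  return s;
}
```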
  - synonyms, e.g.:

    = User interface

    = UI
    {c}
    {synonym}
    {title2}

    \x[user-interface]{c} is too long, I just say \x[ui].

    would render something like:

    <a href="#user-interface">User interface</a> is too long, I just say <a href="user-interface">UI</a>

    Furthermore, this also generates an output file ui.html which redirects to the main user-interface.html, so it serves as a way to have backward compatibility on page renames. And the title2 makes it appear on the main title in parentheses, something like:

    <h1>User interface (UI)</h1>
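For the curious: such a redirect page can be implemented with a plain HTML meta refresh. The following is a hypothetical sketch of what ui.html could contain, not necessarily the exact output that OurBigBook generates:

```
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="refresh" content="0; url=user-interface.html">
</head>
<body></body>
</html>
```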
  - header disambiguation, e.g.:

    My favorite fruits are \x[apple-fruit]{p}! My least favorite brand is \x[apple-company]! \x[apple] computers are too expensive.

    == Apple
    {disambiguate=fruit}

    == Apple
    {c}
    {disambiguate=company}

    = Apple
    {c}
    {synonym}

    which renders something like:
    - \x[apple-fruit]{p}: <a href="apple-fruit">apples</a>
    - \x[apple-company]: <a href="apple-company">Apple</a>
    - \x[apple]: also <a href="apple-company">Apple</a> because of the synonym
    - == Apple\n{disambiguate=fruit}: <h2 id="apple-fruit">Apple (fruit)</h2>
    - == Apple\n{disambiguate=company}: <h2 id="apple-company">Apple (company)</h2>
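The ID generation with {disambiguate=} described above can be sketched as follows. This is a hypothetical illustration of the behavior, not the actual implementation:

```javascript
// Sketch: derive a header ID from its title and optional {disambiguate=} value,
// lowercasing and replacing runs of non-alphanumeric characters with hyphens.
function headerId(title, disambiguate) {
  const base = disambiguate ? title + ' ' + disambiguate : title;
  return base
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
}
```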
  - tags are regular headers, linked with the \H child argument or the \x child argument:

    = Animal

    == Dog
    {tag=domestic}
    {tag=cute}

    == Cat
    {tag=domestic}
    {tag=cute}

    == Bat
    {tag=flying}

    = Flying

    = Cute

    = Domestic
  - unlimited header levels: levels deeper than 6 are rendered in HTML as appropriately styled divs with an ID:

    = h1

    == h2

    === h3

    ==== h4

    ===== h5

    ====== h6

    ======= h7

    ======== h8
- generate lists of incoming links between internal headers: it shows every internal link coming into the current page
- automatic file upload and directory listing of non-OurBigBook files via the _raw directory
- is written in JavaScript and therefore runs natively in the browser to allow live previews, as shown at: docs.ourbigbook.com/_obb/dist/editor
- helps you with publishing:
  - ourbigbook --publish publishes in a single command to the configured target (default: GitHub Pages)
  - OurBigBook tries to deal with media such as images and video intelligently for you, e.g.: Section 4.2.8.2. "Where to store images". E.g. you can keep media in a separate media repository, my-media-repository, and then by configuring in ourbigbook.json:

    "media-providers": {
      "github": {
        "default-for": ["image", "video"],
        "path": "media",
        "remote": "yourname/myproject-media"
      }
    }

    you can use images in that repository with:

    \Image[My_image_basename.jpg]

    instead of:

    \Image[https://raw.githubusercontent.com/cirosantilli/myproject--media/master/My_image_basename.jpg]

  - inotifywait watch and automatic rebuild with -w, --watch:

    ourbigbook --watch input-file.bigb
- automatic code formatting with --format-source
OurBigBook is designed entirely to allow writing complex professional HTML and PDF scientific books, blogs, articles and encyclopedias.
OurBigBook aims to be the ultimate LaTeX "killer", allowing books to be finally published as either HTML or PDF painlessly (LaTeX being only a backend to PDF generation).
It aims to be more powerful and saner than Markdown and Asciidoctor.
Originally, OurBigBook was meant to be both saner and more powerful than Markdown and Asciidoctor.
But alas, as Ciro started implementing and using it, he started to bring some Markdown insanity he missed back in.
And so this "degraded" slightly into a language slightly saner than Asciidoctor but with an amazing Node.js implementation that makes it better for book writing and website publishing.
Notably, we hope that our escaping is a bit saner: a backslash escapes everything, instead of Asciidoctor's "different escapes for every case" approach: github.com/asciidoctor/asciidoctor/issues/901
But hopefully, having started from a saner point will still produce a saner end result, e.g. there are sane constructs for every insane one.
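To illustrate the "backslash escapes everything" rule, here is a minimal sketch in OurBigBook markup. The exact rendered output is not shown in this document, so treat this as an assumption-laden illustration:

```
A literal backslash: \\, literal square brackets: \[ and \], literal curly braces: \{ and \}.
```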
It is intended that this will be an acceptable downside, as OurBigBook will be used primarily for large complex content such as books rather than forum posts, and will therefore primarily be written either:
- in text editors locally, where users have more features than in random browser textareas
- in a dedicated website that will revolutionize education, and therefore have a good JavaScript editing interface: github.com/cirosantilli/write-free-science-books-to-get-famous-website
For example, originally OurBigBook had exactly five magic characters, with similar functions as in LaTeX:
- \: a backslash to start a macro, like in LaTeX
- [ and ]: left and right square brackets to delimit mandatory macro arguments
- { and }: left and right curly braces to delimit optional macro arguments

(and double blank newlines for paragraphs, if you are pedantic), but this later degenerated into many more with insane macro shortcuts.
We would like to have only square brackets for both optional and mandatory arguments, to have even fewer magic characters, but that would make the language difficult to parse for computers and humans. LaTeX was right for once!
This produces a very regular syntax that is easy to learn, including doing:
- arbitrary nesting of elements
- adding arbitrary properties to elements
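As a small illustration of both points, here is a sketch that combines constructs documented elsewhere in this manual: an \a link nested inside a \Q quotation, with the {external} property attached to the link. Treat the exact combination as illustrative rather than canonical:

```
\Q[See \a[http://example.com][this website]{external} for more.]
```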
This sanity also makes the long tail of the learning curve, the endless edge cases found in Markdown and Asciidoctor, disappear.
The language is designed to be philosophically isomorphic to HTML to:
- further reduce the learning curve
- ensure that most of HTML constructs can be reached, including arbitrary nesting
More precisely:
- macro names map to tag names, e.g. \a maps to <a
- one of the arguments of a macro maps to the content of the HTML element, and the others map to attributes. E.g., in a link:

  \a[http://example.com][Link text]

  the first macro argument, http://example.com, maps to the href of <a, and the second macro argument, Link text, maps to the internal content of <a>Link text</a>.
The high sanity of OurBigBook, also makes creating new macro extensions extremely easy and intuitive.
All built-in language features use the exact same API as new extensions, which ensures that the extension API is sane forever.
Markdown is clearly missing many key features such as block attributes and cross references, and has no standardized extension mechanism.
The "more powerful than Asciidoctor" part is only partially true, since Asciidoctor is very featureful and can do basically anything through extensions.
The difference is mostly that OurBigBook is completely and entirely focused on making amazing scientific books, and so has key features for that application out-of-the-box, notably:
- amazing header/ToC/ID features, including proper error reports: never have a broken internal link or duplicate ID again
- server-side pre-rendered maths with KaTeX: all divs and spans are ready, the browser only applies CSS, no JavaScript gets executed
- publishing: we take care of website publishing for you out-of-the-box, no need to integrate into an external project like Jekyll
- -S, --split-headers:
  - github.com/asciidoctor/asciidoctor/issues/626 feature request
  - github.com/owenh000/asciidoctor-multipage third-party plugin that does it

We feel that some of those features have required specialized code that could not be easily implemented as a standalone macro.
Another advantage over Asciidoctor is that the reference implementation of OurBigBook is in JavaScript, and can therefore be used for browser live preview out of the box. Asciidoctor does transpile to JS with Opal, but who wants to deal with that layer of complexity?
Static wiki generators: this is perhaps the best way of classifying this project :-)
- github.com/gollum/gollum: already has a local server editor! But no WYSIWYG nor live preview. Git integration by default, so saving on the UI already generates a Git commit. We could achieve that with github.com/isomorphic-git/isomorphic-git, which would be really nice. Does not appear to have built-in static generation. Does not appear to check that any links are correct.
- github.com/wcchin/markypydia
- obsidian.md: closed source, Markdown with cross-file references + a SaaS. Appears to require payment for any publishing. 28k Twitter followers as of 2021: twitter.com/obsdmd. Founders are likely Canadians of Asian descent from Waterloo University: www.linkedin.com/in/lishid/ | www.linkedin.com/in/ericaxu/, also working in parallel on dynalist.io. 2020 review at: www.youtube.com/watch?v=aK2fOQRNSxc. Has an offline editor with side-by-side preview. Compares itself with Roam and Notion, but we can't find any public publishing on those; they seem to be enterprise-only things.
Static book generators:
- github.com/rstudio/bookdown, bookdown.org. Very similar feature set to what we want!!! Transpiles to Markdown and then goes through Pandoc (bookdown.org/yihui/bookdown/pandoc.html), thus will never run in the browser without huge translation layers. But it does have an obscene number of output formats.
- Hugo. Pretty good, similar feature set to ours. But Go-based, so hard to run in the browser, and it adds ad hoc features on top of Markdown once again.
- en.wikipedia.org/wiki/Personal_wiki
- github.com/hplgit/doconce
- www.gwern.net/About#source is pretty interesting, uses github.com/jaspervdj/Hakyll/ + some custom stuff.
- github.com/JerrySievert/bookmarkdown
- www.gitbook.com/
- github.com/rust-lang/mdBook. Impressive integrated search feature. Like Gitbook but implemented in Rust.
- github.com/facebook/docusaurus: React + Markdown based, written in TypeScript. So how can it build fast? Gotta benchmark.
- vimdoc: vimdoc.sourceforge.net/ They do have perfectly working Internal cross file references, see any page e.g. vimdoc.sourceforge.net/htmldoc/pattern.html.
- typst: github.com/typst/typst An attempt at a LaTeX killer. Has its own typesetting engine, does not simply transpile to LaTeX. Meant to be faster and simpler to write. No HTML output as of writing: github.com/typst/typst/issues/721
Less related but of interest, similar philosophy to what Ciro wants, but no explicitly reusable system:
Ciro Santilli developed OurBigBook to perfectly satisfy his writing style, which is basically "create one humongous document where you document everything you know about a subject so everyone can understand it, and just keep adding to it".
cirosantilli.com is the first major document that he has created in OurBigBook.
He decided to finally create this new system after having repeatedly facing limitations of Asciidoctor which were ignored/wontfixed upstream, because Ciro's writing style is not as common/targeted by Asciidoctor.
The following large documents, which Ciro worked on extensively, made the limitations of Asciidoctor clear to him and were a major motivation for this work:
The key limitations that repeatedly annoyed Ciro were:
- cannot go over header level 6, addressed at: unlimited header levels
- the need for -S, --split-headers to avoid a single too-large HTML output that would never get indexed properly by search engines, and takes a few seconds to load in any browser, which is unacceptable user experience
OurBigBook Markup is the lightweight markup language used in the OurBigBook project.
It works both on the OurBigBook Web dynamic website, and on OurBigBook CLI static websites from the command line.
OurBigBook Markup files use the .bigb extension.

Paragraphs are made by simply adding an empty line, e.g.:
My first paragraph.
And now my second paragraph.
Third one to finish.
which renders as:
My first paragraph.

And now my second paragraph.

Third one to finish.
Headers are created by starting the line with equal signs. The more equal signs, the deeper you are, e.g.:

= Animal
== Mammal
=== Dog
=== Cat
== Bird
=== Pigeon
=== Chicken

On OurBigBook Web, the toplevel header of each page goes into a separate title box, so there things would just look like:

- title box: "Animal"
- body:

  == Mammal
  === Dog
  === Cat
  == Bird
  === Pigeon
  === Chicken
You can use any header as a tag of any other header, e.g.:
= Animal
== Dog
{tag=Cute animal}
== Turtle
{tag=Ugly animal}
== Animal cuteness
=== Cute animal
=== Ugly animal
Headers have several powerful features that you can read more about under \H arguments, e.g. the \H synonym argument and the \H disambiguate argument.

To link to any of your other pages, you can use angle brackets (less than/greater than signs):

I have a <cute animal>. <Birds> are too noisy.

Note how capitalization and pluralization generally just work.
To use a custom link text on a reference, use the following syntax:
I have a <cute animal>[furry animal]. <Birds>[feathery animals] are too noisy.
External links can be input directly as:
This is a great website: https://example.com
I really like https://example.com[this website].
which renders as:
This is a great website: example.com

I really like this website.
Code blocks are done with backticks `. With just one backtick, you get a code block inside the text:

The function call `f(x + 1, "abc")` is wrong.

which renders as:

The function call f(x + 1, "abc") is wrong.

With two or more backticks, you get a code block on its own line, possibly with multiple code lines:
The function:
``
function f(x, s) {
return x + s
}
``
is wrong.
which renders as:
The function:

function f(x, s) {
  return x + s
}

is wrong.
Mathematics syntax is very similar to code blocks: you just enter your LaTeX code in it:
The number $\sqrt{2}$ is irrational.
The same goes for:
$$
\frac{1}{\sqrt{2}}
$$
which renders as:
The number √2 is irrational.

The same goes for:
We also have a bunch of predefined macros from popular packages, e.g. \dv from the physics package for derivatives:
$$
\dv{x^2}{x} = 2x
$$
which renders as:
You can refer to specific equations like this:
As shown in <equation Very important equation>, this is true.
$$
\frac{1}{\sqrt{2}}
$$
{title=Very important equation}
which renders as:
As shown in Equation 3. "Very important equation", this is true.
Images and videos are also easy to add and refer to:
As shown at <image Cute chicken chick>, chicks are cute.
\Image[https://upload.wikimedia.org/wikipedia/commons/thumb/c/c9/H%C3%BChnerk%C3%BCken_02.jpg/800px-H%C3%BChnerk%C3%BCken_02.jpg?20200716091201]
{title=Cute chicken chick}
\Video[https://www.youtube.com/watch?v=j_fl4xoGTKU]
{title=Top Down 2D Continuous Game by Ciro Santilli (2018)}
which renders as:
As shown at Figure 10. "Cute chicken chick", chicks are cute.
Images can take a bunch of options, about which you can read more at image arguments. Most should be self-explanatory; here is an image with a bunch of useful arguments:
\Image[https://upload.wikimedia.org/wikipedia/commons/thumb/c/c9/H%C3%BChnerk%C3%BCken_02.jpg/800px-H%C3%BChnerk%C3%BCken_02.jpg?20200716091201]
{title=Ultra cute chicken chick}
{description=
The chicken is yellow, and the hand is brown.
The background is green.
}
{border}
{height=400}
{source=https://commons.wikimedia.org/wiki/File:H%C3%BChnerk%C3%BCken_02.jpg}
which renders as:
Lists are written by starting the line with an asterisk *:
* first item
* second item
* and the third
which renders as:
- first item
- second item
- and the third
A nested list:
* first item
* first item version 1
* first item version 2
* first item version 2 1
* first item version 2 2
* second item
* and the third
which renders as:
- first item
- first item version 1
- first item version 2
- first item version 2 1
- first item version 2 2
- second item
- and the third
List items can contain any markup, e.g. paragraphs. You just need to keep the same number of spaces, e.g.:
* first item.

  Second paragraph of first item.

  And a third one.
* second item
  * second item v1

    Another paragraph in second item v1
  * second item v2
which renders as:
- first item.

  Second paragraph of first item.

  And a third one.
- second item
  - second item v1

    Another paragraph in second item v1
  - second item v2
Tables are not very different from lists. We use double pipes || for headers, and a single pipe | for regular rows:
|| City
|| Sales
| Salt Lake City
| 124,00
| New York
| 1,000,000
which renders as:
City           | Sales
Salt Lake City | 124,00
New York       | 1,000,000
To add a title, we need to use an explicit \Table macro, as in:
See <table Sales per city> for more information.
\Table
{title=Sales per city}
[
|| City
|| Sales
| Salt Lake City
| 124,00
| New York
| 1,000,000
]
which renders as:
See Table 1. "Sales per city" for more information.
City           | Sales
Salt Lake City | 124,00
New York       | 1,000,000
This section documents all OurBigBook macros.
Macros are magic commands that do cool stuff, e.g. \Image to create an image. The most common macros also have insane macro shortcuts to keep the syntax shorter.
The general macro syntax is described at Section 4.3. "OurBigBook Markup syntax".
Insane autolink, i.e. the link text is the same as the link address:
Exact parsing rules are described at: Section 4.2.1.2. "Insane link parsing rules".
The website http://example.com is cool. See also:
\Q[http://example.com/2]
which renders as:
The website example.com is cool. See also:
Note that the prefixes http:// and https:// are automatically removed from the displayed link, since they are so common that they would simply add noise. Equivalent sane version:
The website \a[http://example.com] is cool.
\Q[\a[http://example.com/2]]
which renders as:
The website example.com is cool.
Insane link with custom text:

The website http://example.com[example.com] is cool.

which renders as:

The website example.com is cool.

Equivalent sane version:

The website \a[http://example.com][example.com] is cool.

which renders as:

The website example.com is cool.

If the custom text is empty, an autolink is generated. This is often useful if you want your link to be followed by punctuation:

The website is really cool: http://example.com[].

which renders as:

The website is really cool: example.com.

This could also be achieved with the sane syntax of course, but this pattern saves a tiny bit of typing.
Link to a file in the current repository:
This links to a raw view of that file.
The file \a[index.js] is cool.
which renders as:
The file index.js is cool.
Link to a directory in the current repository:
This links to an output file that contains a generated directory listing of that directory.
The directory \a[file_demo] is cooler.
which renders as:
The directory file_demo is cooler.
The link target. E.g. in:

\a[http://example.com]

the href equals http://example.com.

Important behaviors associated with this argument for local links are detailed at Section 4.2.1.1.3. "\a external argument":
- they are checked for existence in the local filesystem
- they are modified to account for scopes with -S, --split-headers
Analogous to the \x ref argument, e.g.:

Trump said this and that.https://en.wikipedia.org/wiki/Donald_Trump_Access_Hollywood_tape#Trump's_responses{ref}https://web.archive.org/web/20161007210105/https://www.donaldjtrump.com/press-releases/statement-from-donald-j.-trump{ref}

Then he said that and this.https://en.wikipedia.org/wiki/Donald_Trump_Access_Hollywood_tape#Trump's_responses{ref}https://web.archive.org/web/20161007210105/https://www.donaldjtrump.com/press-releases/statement-from-donald-j.-trump{ref}
which renders as:
If given and true, forces the link to be an external link. Otherwise, externality is automatically guessed based on the given address, as explained at Section 4.2.1.1.3.3. "External link".
Common use cases for the external argument are to link to non-OurBigBook content in the current domain, e.g.:
- link to the domain root path for subdirectory deployments
- link to non-OurBigBook subdirectories. E.g., github.com/cirosantilli/cirosantilli.github.io/blob/master/README.bigb is rendered at cirosantilli.com, and contains links \a[markdown-style-guide]{external} to cirosantilli.com/markdown-style-guide, whose source lives in a separate non-OurBigBook repository: github.com/cirosantilli/markdown-style-guide/
The \a external argument can be used to refer to the root of the domain. E.g. suppose that we have a subdirectory deployment under https://mydomain.com/subdir/. Then:
- \a[/somepath] refers to the directory /subdir/somepath
- \a[/somepath]{external} refers to the directory /somepath
TODO: test if it works. But we want it to be possible to deploy OurBigBook CLI static websites to subdirectories, e.g.:
- https://mydomain.com/subdir/
- https://mydomain.com/subdir/mathematics

If it doesn't work, it should be easy to make it work, as we use relative links almost everywhere already. Likely there would only be some minor fixes to the --template arguments.

An external link is a link that points to a resource that is not present in the current OurBigBook project sources.
By default, most links are internal links: it is often the case in computer programming tutorials that we want to refer to source files in the current directory. So from our `README.bigb`, we could want to write something like:
Have a look at this amazing source file: \a[index.js].
which renders as:
Have a look at this amazing source file: index.js.
`\a[ourbigbook]` is an internal link. A typical external link, which points to an absolute URL, is something like:
This is great website: https://cirosantilli.com
which renders as:
This is great website: cirosantilli.com
OurBigBook considers a link internal (relative) by default if it is not a URL with protocol.

Therefore, the following links are external by default:
- http://cirosantilli.com
- https://cirosantilli.com
- file:///etc/fstab
- ftp://cirosantilli.com

and the following are internal by default:
- index.js
- ../index.js
- path/to/index.js
- /path/to/index.js
- //example.com/path/to/index.js

Note that paths starting with `/` refer to the root of the OurBigBook CLI deployment, not the root of the domain, see: link to the domain root path.
A link being internal has the following effects:
- the correct relative path to the file is used when using nested scopes with `-S`, `--split-headers`. For example, if we have:

  = h1
  == h2
  {scope}
  === h3

  \a[index.js]

  then in split header mode, `h3` will be rendered to `h2/h3.html`. Therefore, if we didn't do anything about it, the link to `index.js` would render as `href="index.js"` and thus point to `h2/index.js` instead of the correct `index.js`. Instead, OurBigBook automatically converts it to the correct `href="../index.js"`
- the `_raw` directory prefix is added to the link
- the existence of the file is checked at compile time. If it does not exist, an error is given.
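The relative-path fix-up described above can be sketched as follows. This is an illustrative simplification, not the actual OurBigBook implementation: it just prepends one `../` per directory level of the output page.

```javascript
// Illustrative sketch (not the real OurBigBook code): prepend one
// '../' per directory level of the output page, so that a link that
// is correct relative to the project root stays correct from a
// nested split-header output page like h2/h3.html.
function relativeHref(outputPath, target) {
  const depth = outputPath.split('/').length - 1;
  return '../'.repeat(depth) + target;
}

// From h2/h3.html, index.js must be reached as ../index.js:
relativeHref('h2/h3.html', 'index.js'); // → '../index.js'
```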
Implemented at: github.com/ourbigbook/ourbigbook/issues/87 as `relative`, and subsequently modified to the more accurate/useful `external`.

The `_dir` directory tree contains listings of the files in the `_raw` directory. We originally wanted to place these listings under `_raw` itself, but this leads to unsolvable conflicts when a file called `index.html` is present alongside the index.

Analogous to the `_raw` directory, but for the `\H` `file` argument.

OurBigBook places output files that are not the output of `.bigb` to `.html` conversion (i.e. non-`.html` output files) under the `_raw/` prefix of the output. Internal links then automatically add the `_raw/` prefix to every such link.

For example, consider an input directory that contains:
notindex.bigb
= Hello
Check out \a[myfile.c].
The source code for this file is at: \a[notindex.bigb].
\Image[myimg.png]
myfile.c
int i = 1;
myimg.png
Binary!
After conversion with:

ourbigbook .

the following files would exist in the output directory:
- `notindex.html`: converted output of `notindex.bigb`
- `_raw/notindex.bigb`: a copy of the input source code `notindex.bigb`
- `_raw/myfile.c`: a copy of the input file `myfile.c`
- `_raw/myimg.png`: a copy of the input file `myimg.png`

and all links/image references would work and automatically point to the correct locations under `_raw`.

Some live examples:
The reason why a `_raw` prefix is needed is to avoid naming conflicts with OurBigBook outputs. E.g. suppose we had the files:
- configure
- configure.bigb

Then, in a server that omits the `.html` extension, if we didn't have `_raw/`, both `configure.html` and `configure` would be present under `/configure`. With `_raw` we instead get:
- `_raw/configure`: the input file
- `/configure`: the HTML output
A URL with protocol is a URL that matches the regular expression `^[a-zA-Z]+://`. The following are examples of URLs with protocol:
- http://cirosantilli.com
- https://cirosantilli.com
- file:///etc/fstab
- ftp://cirosantilli.com
The following aren't:
- index.js
- ../index.js
- path/to/index.js
- /path/to/index.js
- //example.com/path/to/index.js. This one is a bit tricky. Web browsers would consider it a protocol-relative URL, which technically implies a protocol, although that protocol would be different depending on how you are viewing the file, e.g. locally through `file://` vs on a website through `https://`. For simplicity's sake, we just consider it a URL without protocol.
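The protocol check is easy to express directly as code. Here is a minimal JavaScript sketch using the exact regular expression given above:

```javascript
// A URL "with protocol" matches ^[a-zA-Z]+://
const URL_WITH_PROTOCOL_RE = /^[a-zA-Z]+:\/\//;

URL_WITH_PROTOCOL_RE.test('https://cirosantilli.com');       // true
URL_WITH_PROTOCOL_RE.test('ftp://cirosantilli.com');         // true
URL_WITH_PROTOCOL_RE.test('../index.js');                    // false
// Protocol-relative URLs count as not having a protocol:
URL_WITH_PROTOCOL_RE.test('//example.com/path/to/index.js'); // false
```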
Insane links start at any of the recognized protocols, which are the ones shown at: Section 4.4.3. "Known URL protocols". They start absolutely anywhere if not escaped, e.g.:

ahttp://example.com

renders something like:

a <a href="http://example.com">

To prevent expansion, you have to escape the protocol with a backslash `\`, e.g.:

\http://example.com

Empty domains like:

http://
https://

don't become links however. But this one does:

http://a
Insane links end when any insane link termination character is found. As a consequence, to have an insane link followed immediately by punctuation such as a period, you should use an empty argument, otherwise the punctuation will go into the link:

Check out this website: http://example.com[].

which renders as:

Check out this website: example.com.

Another common use case is:

As mentioned on the tutorial (http://example.com[see this link]).

which renders as:

As mentioned on the tutorial (see this link).
If you want your link to include one of the terminating characters, e.g. `]`, any character can be escaped with a backslash, e.g.:
Hello http://example.com/\]a\}b\\c\ d world.
which renders as:
Hello example.com/]a}b\c d world.
Note that the `http://example.com` inside `\a[http://example.com]` only works because we do some post-processing magic that prevents its expansion; otherwise the link would expand twice:
\P[http://example.com]
\a[http://example.com]
which renders as:
This magic can be observed with `--help-macros`, by seeing that the `href` argument of the `a` macro has the property:
"elide_link_only": true,
The following characters are the "insane link termination characters":
- space
- newline (`\n`)
- open or close square bracket (`[` or `]`)
- open or close curly brace (`{` or `}`)

Insane cross references and insane topic links with a single word also terminate when any of these characters is found, see also: Section 4.2.1.2. "Insane link parsing rules".
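The termination scan can be sketched as follows. This is an illustrative simplification, not the actual OurBigBook parser (in particular, it ignores backslash escaping, covered below):

```javascript
// Illustrative sketch of insane link termination: scan forward until
// one of the termination characters listed above is found.
// Note: backslash escaping is deliberately omitted here.
const TERMINATORS = new Set([' ', '\n', '[', ']', '{', '}']);

function insaneLinkEnd(text, start) {
  let i = start;
  while (i < text.length && !TERMINATORS.has(text[i])) i++;
  return i; // index one past the last character of the link
}

const s = 'http://example.com rest';
s.slice(0, insaneLinkEnd(s, 0)); // → 'http://example.com'
```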
OurBigBook automatically encodes, in every link `href`, characters that are not recommended in URLs. This way you can for example simply write arbitrary Unicode URLs and OurBigBook will escape them for you in the HTML output.

The only exception to this is the percent sign `%` itself, which is left untouched so that explicitly encoded URLs also work. So if you want a literal percent sign then you have to explicitly write it yourself as `%25`.

* acute a Á as raw Unicode: https://en.wikipedia.org/wiki/Á
* acute a Á explicitly escaped by user: https://en.wikipedia.org/wiki/%C3%81
which renders as:
- acute a Á as raw Unicode: en.wikipedia.org/wiki/Á
- acute a Á explicitly escaped by user: en.wikipedia.org/wiki/%C3%81
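The rule above can be sketched like this. This is a simplified assumption about the behavior, not the actual OurBigBook implementation: encode each character as `encodeURI` would, but pass `%` through unchanged.

```javascript
// Sketch of the href encoding rule (not the actual OurBigBook code):
// percent-encode characters as encodeURI would, but leave '%' itself
// untouched so that already-encoded URLs pass through unchanged.
function encodeHref(url) {
  return Array.from(url)
    .map(c => (c === '%' ? c : encodeURI(c)))
    .join('');
}

encodeHref('https://en.wikipedia.org/wiki/Á');
// → 'https://en.wikipedia.org/wiki/%C3%81'
encodeHref('https://en.wikipedia.org/wiki/%C3%81');
// → unchanged: 'https://en.wikipedia.org/wiki/%C3%81'
```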
Some \b[bold] text.
which renders as:
Some bold text.
The `\br` macro inserts a visible newline between two lines without creating a paragraph. There is basically one application for this: poetry, which would be too ugly as a code block due to the fixed width font:
Paragraph 1 Line 1\br
Paragraph 1 Line 2\br
Paragraph 2 Line 1\br
Paragraph 2 Line 2\br
which renders as:
Paragraph 1 Line 1
Paragraph 1 Line 2

Paragraph 2 Line 1
Paragraph 2 Line 2
Inline code (code that should appear in the middle of a paragraph rather than on its own line) is done with the single backtick (`) insane macro shortcut:

My inline `x = 'hello\n'` is awesome.

which renders as:

My inline `x = 'hello\n'` is awesome.

Block code (code that should appear on its own line) is done with two or more backticks (``):
``
f() {
return 'hello\n';
}
``
which renders as:
f() {
    return 'hello\n';
}
The sane version of inline code is a lower case `c`:

My inline \c[[x = 'hello\n']] is awesome.

which renders as:

My inline `x = 'hello\n'` is awesome.

The sane version of block code is an upper case `C`:
\C[[
f() {
return 'hello\n';
}
]]
which renders as:
f() {
    return 'hello\n';
}
The capital vs lower case theme is also used in other elements, see: block vs inline macros.
If the content of a sane code block has many characters that you would need to escape, you will often want to use literal arguments, which work just like they do for any other argument. Note that the initial newline is skipped automatically in code blocks, just as for any other element, due to argument leading newline removal, so you don't have to worry about it. For example:
\C[[[
A paragraph.
\C[[
And now, some long, long code, with lots
of chars that you would need to escape:
\ [ ] { }
]]
A paragraph.
]]]
which renders as:
A paragraph.

\C[[
And now, some long, long code, with lots
of chars that you would need to escape:
\ [ ] { }
]]

A paragraph.
The distinction between inline `\c` and block `\C` code is needed because in HTML, `pre` cannot go inside `p`.

We could have chosen to do some magic to differentiate between them, e.g. checking if the block is the only element in a paragraph, but we decided not to do that to keep the language saner.
And now a code block outside of `\OurBigBookExample`, to test how it looks directly under the `\Toplevel` implicit macro:

Hello
Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello
HelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHello
Hello
Now with short description with math and underline:
Hello
Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello
HelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHelloHello
Hello
And now a very long inline code:
Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello
Example:
See the: <code Python hello world>.
``
print("Hello world")
``
{title=Python hello world}
{description=Note how this is super short unlike the C hello world!}
which renders as:
See the: Code 2. "Python hello world".

print("Hello world")
Example:
See the: <code C hello world>.
``
#include <stdio.h>
int main(void) {
puts("hello, world");
}
``
{title=C hello world}
which renders as:
See the: Code 3. "C hello world".

#include <stdio.h>

int main(void) {
    puts("hello, world");
}
The `Comment` and `comment` macros are regular macros that do not produce any output. Capitalization is explained at: Section 4.4.2. "Block vs inline macros".

You will therefore mostly want to use them with a literal argument, which will, as for any other macro, ignore any macros inside of it.
Before comment.
\Comment[[
Inside comment.
]]
After comment.
which renders as:
Before comment.

After comment.
And an inline one:
My inline \comment[[inside comment]] is awesome.
\comment[[inside comment]] inline at the start.
which renders as:
My inline is awesome.

inline at the start.
Insane headers are written with `= ` (equal sign, space). They end at the first newline found, and therefore cannot contain raw newline tokens:
= My h1
== My h2
=== My h3
Equivalent sane:
\H[1][My h1]
\H[2][My h2]
\H[3][My h3]
Custom ID for cross references on insane headers:
= My h1
{id=h1}
== My h2
{id=h2}
=== My h3
{id=h3}
Sane equivalent:
\H[1][My h1]{id=h1}
\H[2][My h2]{id=h2}
\H[3][My h3]{id=h3}
There is no limit to how many levels we can have, for either sane or insane headers!
HTML is randomly limited to `h6`, so OurBigBook just renders higher levels as an `h6` with a `data-level` attribute to indicate the actual level for possible CSS styling:
<h6 data-level="7">My title</h6>
The recommended style is to use insane headers up to `h6`, and then move to sane ones for higher levels, otherwise it becomes very hard to count the `=` signs.

To avoid this, we considered making the insane syntax be instead:

= 1 My h1
= 2 My h2
= 3 My h3

but it just didn't feel as good, and is a bit harder to type than just smashing `=` n times for lower levels, which is the most common use case. So we just copied markdown.

The very first header of a document can be of any level, although we highly recommend your document to start with an `\H[1]`, and to contain exactly one `\H[1]`, as this has implications such as:
- `\H[1]` is used for the document title: HTML document title
- `\H[1]` does not show on the table of contents
After the initial header however, you must not skip a header level, e.g. the following would give an error because it skips level 3:
= my 1
== my 2
==== my 4
The toplevel header of an OurBigBook file is its first header and the one with the lowest level, e.g. in a document with the recommended syntax:

= Animal
== Dog
=== Bull Terrier
== Cat

the header `= Animal` is the toplevel header. Being the toplevel header gives a header some special handling, described in child sections of this section and elsewhere throughout this documentation.
The toplevel header is only defined if the document has a single header of the lowest level. E.g. the following has only a single `h2`:

== My 2
=== My 3 1
=== My 3 2

Header numbers won't show for the toplevel header. For example, the above headers would render like:

My 2
1. My 3 1
2. My 3 2

rather than:

1. My 2
1.1. My 3 1
1.2. My 3 2

This is because in this case, we guess that the `h2` is the toplevel.
When the OurBigBook input comes from a file (and not e.g. stdin), the default ID of the first header in the document is derived from the basename of the OurBigBook input source file rather than from its title.
The only exception to this is the home article, where the ID is empty.
For example, a file named `my-file.bigb` which contains:

= Awesome ourbigbook file

has the header ID `my-file` rather than `awesome-ourbigbook-file`. See also: automatic ID from title.

If the file is an index file other than the toplevel index file, then the basename of the parent directory is used instead, e.g. the toplevel ID of a file:

my-subdir/README.bigb

would be:

#my-subdir

rather than:

#README.bigb
For the toplevel index file however, the ID is just taken from the header itself as usual. This is done because you can't in general control the directory name of a project.
For example, a GitHub pages root directory must be named as
<username>.github.io
. And users may need to rename directories to avoid naming conflicts.As a consequence of this, the toplevel index file cannot be included in other files.
TODO: we kind of wanted this to be the ID of the toplevel header instead of the first header, but this would require an extra postprocessing pass (to determine if the first header is toplevel or not), which might affect performance, so we are not doing it right now.
If given, makes the header capitalized by default on cross file references.
More details at: Section 4.2.20.2. "Cross reference title inflection".
This multiple-valued argument marks the given IDs as being children of the current header. The effect is the same as adding the `\x` `child` argument to a cross reference under the header. Notably, such marked target IDs will show up on the tagged autogenerated header metadata section.

This argument is deprecated in favor of the `\H` `tag` argument.

Example:
renders exactly as:
= Animal
== Mammal
=== Bat
=== Cat
== Wasp
== Flying animal
{child=bat}
{child=wasp}
\x[bat]
\x[wasp]
= Animal
== Mammal
=== Bat
=== Cat
== Wasp
== Flying animal
\x[bat]{child}
\x[wasp]{child}
The header `child` syntax is generally preferred because at some point while editing the content of the header, you might accidentally remove mentions such as `\x[bat]{child}`, and then the relationship would be lost.

The `\H` `tag` argument does the same as the `\x` `child` argument but in the opposite direction.

If given, the current section contains metadata about a file or other resource with the given URL.
If empty, the URL of the file is extracted directly from the header. Otherwise, the given URL is used.

For example:

= path/to/myfile.c
{file}

An explanation of what this file is about.

renders a bit like:

= path/to/myfile.c
{id=_file/path/to/myfile.c}

An explanation of what this file is about.

\a[path/to/myfile.c]

``
// Contents of path/to/myfile.c
int main() {
    return 1;
}
``

so note how:
- a `_file/` prefix is automatically added to the ID. This is needed with `-S`, `--split-headers` to avoid a collision between:
  - `path/to/myfile.c`: the actual file
  - `_file/path/to/myfile.c`: the metadata about that file

  Note that locally the `.html` extension is added as in `_file/path/to/myfile.c.html`, which avoids the collision. But on a server deployment, the `.html` is not present, and there would be a conflict if we didn't add the `_file/` prefix.
- a link to the file is added automatically, since users won't be able to click it from the header, as clicking on the header just links to the header itself
- a preview is added. The type of preview is chosen as follows:
In some cases however, especially when dealing with external URLs, we might want to have a more human readable title with a non-empty `file` argument:

The video \x[tank-man-by-cnn-1989] is very useful.

= Tank Man by CNN (1989)
{c}
{file=https://www.youtube.com/watch?v=YeFzeNAHEhU}

An explanation of what this video is about.

which renders something like:

The video \x[tank-man-by-cnn-1989] is very useful.

= Tank Man by CNN (1989)
{id=_file/https://www.youtube.com/watch?v=YeFzeNAHEhU}

\Video[https://www.youtube.com/watch?v=YeFzeNAHEhU]

An explanation of what this video is about.
To create a separate file with the `\H` `file` argument set on the toplevel header, you must put it under the special `_file` input directory. For example, `_file/path/to/myfile.txt.bigb` could contain something like:

= myfile.txt
{file}

Description of my amazing file.

and it would be associated to the file:

path/to/myfile.txt
The content of the header `= myfile.txt` is arbitrary, as it can be fully inferred from the file path `_file/path/to/myfile.txt.bigb`. TODO: add linting for it. Perhaps we should make adding a header optional and auto-generate the header instead. But having at least an optional header is good as a way of being able to set header properties like tags.

This section contains some live demos of the `\H` `file` argument.

An explanation of what this directory is about.
Going deeper.
An explanation of what this text file is about.
Another line.
file_demo/hello_world.js
#!/usr/bin/env node
console.log('hello world')
Going deeper.
file_demo/file_demo_subdir/hello_world.js
#!/usr/bin/env node
console.log('hello world subdir')
This is a central source file that basically contains all the functionality of the OurBigBook Library, so basically the OurBigBook Markup-to-whatever (e.g. HTML) conversion code, including parsing and rendering.
Things that are not there are things that only use markup conversion, e.g.:
- OurBigBook CLI: does conversion from command line
- OurBigBook Web
This file must be able to run in the browser, so it must not contain any Node.js specifics.
It exposes the central `convert` function for markup conversion. You should normally use the packaged `_obb/ourbigbook.js` version of this file when using ourbigbook as an external dependency.

This file is large, and large text files are not previewed, as they would take up too much useless vertical space and disk memory/bandwidth.
index.js was not rendered because it is too large (> 2000 bytes)
Binary files are not rendered.
file_demo/my.bin was not rendered because it is a binary file (contains \x00) of unsupported type (e.g. not an image).
An explanation of what this image is about.
Another line.
An explanation of what this video is about.
This section shows how to use the `file` argument with an arbitrary URL.
This boolean argument determines whether renderings of a header will have section numbers or not. This affects all of:
- headers themselves
- table of contents links
- cross references with the `\x` `full` argument

This option can be set by default for all files with:
By default, headers are numbered as in a book, e.g.:

= h1
== h2
=== h3
==== h4

renders something like:

= h1

Table of contents
* 1. h2
* 1.1. h3
* 1.1.1. h4

== 1. h2
=== 1.1. h3
==== 1.1.1. h4
However, for documents with a very large number of sections, or deeply nested headers, those numbers become more noise than anything else, especially in the table of contents, and you are better off just referring to IDs. E.g. imagine:

1.3.1.4.5.1345.3.2.1. Some deep level

When documents reach this type of scope, you can disable numbering with the `numbered` option. This option can be set on any header, and it is inherited by all descendants.
The option only affects descendants. E.g., if in the above example we turn numbering off at `h2`:

= h1
== h2
{numbered=0}
=== h3
==== h4

then it renders something like:

= h1

Table of contents
* 1. h2
* h3
* h4

== 1. h2
=== h3
==== h4
The more common usage pattern is to disable numbering at the toplevel and enable it only for specific "tutorial-like" sections. An example can be seen at:
- cirosantilli.com/: huge toplevel wiki, for which we don't want numbers
- cirosantilli.com/x86-paging: a specific tutorial, for which we want numbers

which is something like:

= Huge toplevel wiki
{numbered=0}

== h2

=== A specific tutorial
{numbered}
{scope}

==== h4

===== h5

then it renders something like:

= Huge toplevel wiki

Table of contents
* h2
* A specific tutorial
* 1. h4
* 1.1. h5

== h2
=== A specific tutorial
==== 1. h4
===== 1.1. h5

Note how in this case the number for `h4` is just `1.` rather than `1.1.1.`. We only show numberings relative to the nearest non-numbered ancestor, because a full prefix like `1.1.` wouldn't be very meaningful otherwise.

In addition to the basic way of specifying header levels with an explicit level number, as mentioned at Section 4.2.6. "Header", OurBigBook also supports a more indirect ID-based mechanism: the `parent` argument of the `\H` element.

We highly recommend using `parent` for all but the most trivial documents.

For example, the following fixed level syntax:
= My h1
== My h2 1
== My h2 2
=== My h3 2 1

is equivalent to the following ID-based version:

= My h1

= My h2 1
{parent=my-h1}

= My h2 2
{parent=my-h1}

= My h3 2 1
{parent=my-h2-2}
The main advantages of this syntax are felt when you have a huge document with very large header depths. In that case:
- it becomes easy to get levels wrong with so many large level numbers to deal with. It is much harder to get an ID wrong.
- when you want to move headers around to improve organization, things are quite painful without a refactoring tool (which we intend to provide in the browser editor with preview), as you need to fix up the level of every single header. If you are using the ID-based syntax however, you only have to move the chunk of headers and change the `parent` argument of the single top-level header being moved.
Note that when the `parent=` argument is given, the header level must be `1`, otherwise OurBigBook assumes that something is weird and gives an error. E.g. the following gives an error:

= My h1

== My h2
{parent=my-h1}

because the second header has level `2` instead of the required `1` (`= My h2`).

When scopes are involved, the rules are the same as those of internal reference resolution, including the leading `/` to break out of the scope in case of conflicts.

Like the `\H` `child` argument, `parent` also performs ID extraction from title on the argument, allowing you to use the original spaces and capitalization in the target, as in:

= Flying animal

= Bat
{parent=Flying animal}

which is equivalent to:

= Flying animal

= Bat
{parent=flying-animal}
See also: Section 4.2.6.4.5.2. "Header explicit levels vs nesting design choice" for further rationale.
When mixing both the `\H` `parent` argument and scopes, things get a bit complicated, because when writing or parsing, we have to first determine the parent header before resolving scopes. As a result, the following simple rules are used:
- start from the last header of the highest level
- check if the `{parent=XXX}` value is a suffix of its ID
- if not, proceed to the next smaller level, and so on, until a suffix is found

Following those rules, for example, a file `tmp.bigb` containing:
= h1
{scope}
= h1 1
{parent=h1}
{scope}
= h1 1 1
{parent=h1-1}
= h1 1 2
{parent=h1-1}
= h1 1 3
{parent=h1/h1-1}
= h1 2
{parent=h1}
{scope}
= h1 2 1
{parent=h1-2}
{scope}
= h1 2 1 1
{parent=h1-2/h1-2-1}
will lead to the following header tree with `--log headers`:
= h1 tmp
== h2 1 tmp/h1-1
=== h3 1.1 tmp/h1-1/h1-1-1
=== h3 1.2 tmp/h1-1/h1-1-2
=== h3 1.3 tmp/h1-1/h1-1-3
== h2 2 tmp/h1-2
=== h3 2.1 tmp/h1-2/h1-2-1
==== h4 2.1.1 tmp/h1-2/h1-2-1/h1-2-1-1
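The suffix-matching rule above can be sketched as follows. This is a hypothetical simplification, not the real implementation:

```javascript
// Hypothetical sketch of the parent-lookup rule above (not the real
// OurBigBook code). `candidates` is the list of previous header IDs
// ordered deepest level first; we accept the first one whose full
// (scoped) ID ends with the given parent= value.
function findParent(candidates, parentArg) {
  return candidates.find(
    id => id === parentArg || id.endsWith('/' + parentArg)
  );
}

// {parent=h1-2} resolves to the scoped ID h1/h1-2:
findParent(['h1/h1-2/h1-2-1', 'h1/h1-2', 'h1'], 'h1-2'); // → 'h1/h1-2'
```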
Arguably, the language would be even saner if we did:

\H[My h1][
Paragraph.

\H[My h2][]
]

rather than having explicit levels as in `\H[1][My h1]` and so on. But we chose not to do it like most available markups because it leads to too many nesting levels, and it is hard to determine where you are without tooling.
Ciro later "invented" (?) the `\H` `parent` argument, which he feels reaches the perfect balance between the advantages of those two options.

In some use cases, the sections under a section describe inseparable parts of something.
For example, when documenting an experiment you executed, you will generally want an "Introduction", then a "Materials" section, and then a "Results" section for every experiment.
On their own, those sections don't make much sense: they are always referred to in the context of the given experiment.
The problem is then how to get unique IDs for those sections.
One solution would be to manually add the experiment ID as a prefix to every subsection, as in:
= Experiments
See: \x[full-and-unique-experiment-name/materials]
== Introduction
== Full and unique experiment name
=== Introduction
{id=full-and-unique-experiment-name/introduction}
See our awesome results: \x[full-and-unique-experiment-name/results]
For a more general introduction to all experiments, see: \x[introduction].
=== Materials
{id=full-and-unique-experiment-name/materials}
=== Results
{id=full-and-unique-experiment-name/results}
but this would be very tedious.
To keep those IDs shorter, OurBigBook provides the `scope` boolean argument of headers, which works analogously to C++ namespaces for header IDs.

Using `scope`, the previous example could be written more succinctly as:
= Experiments
See: \x[full-and-unique-experiment-name/materials]
== Introduction
== Full and unique experiment name
{scope}
=== Introduction
See our awesome results: \x[results]
For a more general introduction to all experiments, see: \x[/introduction].
=== Materials
=== Results
Note how:
- full IDs are automatically prefixed by the parent scopes, joined with a slash `/`
- we can refer to other IDs within the current scope without duplicating the scope. E.g. `\x[results]` in the example already refers to the ID `full-and-unique-experiment-name/results`
- to refer to an ID outside of the scope, and to avoid name conflicts with IDs inside of the current scope, we start a reference with a slash `/`. So in the example above, `\x[/introduction]` refers to the ID `introduction`, and not `full-and-unique-experiment-name/introduction`.
When nested scopes are involved, cross reference resolution peels off the scopes one by one, trying to find the closest match. E.g. the following works as expected:

= h1
{scope}

== h2
{scope}

=== h3
{scope}

\x[h2]

Here OurBigBook:
- first tries to look for an `h1/h2/h3/h2`, since `h1/h2/h3` is the current scope, but that ID does not exist
- so it removes the `h3` from the current scope, and looks for `h1/h2/h2`, which is still not found
- then it removes the `h2`, leading to `h1/h2`, and that one is found, and therefore is taken
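The peel-off walkthrough above can be sketched as follows. This is an illustrative simplification, not the actual OurBigBook resolver:

```javascript
// Illustrative sketch of scope peel-off resolution: try the target
// inside the full current scope, then drop trailing scope components
// one at a time until a defined ID is found.
function resolveRef(target, scope, ids) {
  const parts = scope ? scope.split('/') : [];
  for (let n = parts.length; n >= 0; n--) {
    const candidate = parts.slice(0, n).concat(target).join('/');
    if (ids.has(candidate)) return candidate;
  }
  return undefined;
}

// Matches the walkthrough above: \x[h2] from scope h1/h2/h3
// tries h1/h2/h3/h2, then h1/h2/h2, then finds h1/h2.
resolveRef('h2', 'h1/h2/h3', new Set(['h1', 'h1/h2', 'h1/h2/h3']));
// → 'h1/h2'
```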
Putting files in subdirectories of the build has the same effect as adding a scope to their top level header.
Notably, all headers inside that directory get the directory prepended to their IDs.
The toplevel directory is determined as described at: the toplevel index file.
For fun and profit.
Let's break this local link: ourbigbook.
When the toplevel header is given the `scope` property, OurBigBook automatically uses the file path for the scope and leaves fragments untouched. For example, suppose that the file `full-and-unique-experiment-name` contains:

= Full and unique experiment name
{scope}

== Introduction

== Materials

In this case, multi-file output will generate a file called `full-and-unique-experiment-name.html`, and the URLs of the subsections will be just:

full-and-unique-experiment-name.html#introduction
full-and-unique-experiment-name.html#materials

instead of:

full-and-unique-experiment-name.html#full-and-unique-experiment-name/introduction
full-and-unique-experiment-name.html#full-and-unique-experiment-name/materials
Some quick interactive cross file link tests:
When using `-S`, `--split-headers`, cross references always point to non-split pages, as mentioned at cross reference targets in split headers. If the `splitDefault` boolean argument is given however:
- the split header becomes the default, e.g. `index.html` is now the split one, and `nosplit.html` is the non-split one
- the header it is given for, and all of its descendant headers, will use the split header as the default internal cross reference target, unless the target header is already rendered in the current page. This does not propagate across includes however.
For example, consider `README.bigb`:

= Toplevel
{splitDefault}

\x[h2][toplevel to h2]
\x[notreadme][toplevel to notreadme]

\Include[notreadme]

== h2

and `notreadme.bigb`:

= Notreadme

\x[h2][notreadme to h2]
\x[notreadme][notreadme to notreadme h2]

== Notreadme h2

Then the following links would be generated:
- `index.html`: split version of `README.bigb`, i.e. does not contain `h2`
  - `toplevel to h2`: `h2.html`. Links to the split version of `h2`, since `h2` is also affected by the `splitDefault` of its parent, and therefore links to it use the split version by default
  - `toplevel to notreadme`: `notreadme.html`. Links to the non-split version of `notreadme.html` since that header is not `splitDefault`, because `splitDefault` does not propagate across includes
- `nosplit.html`: non-split version of `README.bigb`, i.e. contains `h2`
  - `toplevel to h2`: `#h2`, because even though `h2` is `splitDefault`, that header is already present in the current page, so it would be pointless to reload the split one
  - `toplevel to notreadme`: `notreadme.html`
- `h2.html`: split version of `h2` from `README.bigb`
- `notreadme.html`: non-split version of `notreadme.bigb`
  - `notreadme to h2`: `h2.html`, because `h2` is `splitDefault`
  - `notreadme to notreadme h2`: `#notreadme-h2`
- `notreadme-split.html`: split version of `notreadme.bigb`
  - `notreadme to h2`: `h2.html`, because `h2` is `splitDefault`
  - `notreadme to notreadme h2`: `notreadme.html#notreadme-h2`, because `notreadme-h2` is not `splitDefault`
The major application of this is if you like to work with a huge README.bigb containing thousands of small topics.

Splitting those into separate source files would be quite laborious, as it would require duplicating IDs in filenames and setting up includes.

However, after this README reaches a certain size, page loads start becoming annoyingly slow, even though large assets like images and videos are already loaded only on hover or click: the annoying slowness comes from loading the HTML itself before the browser can jump to the ID.

And even worse: this README corresponds to the main index page of the website, so that slowness is what a large number of users will see first.
Therefore, once this README reaches a certain size, you can add the splitDefault attribute to it, to make things smoother for readers.

And if you have a smaller, more self-contained, and highly valuable tutorial such as cirosantilli.com/x86-paging, you can just split that into a separate .bigb source file. This way, any links into the smaller tutorial will show the entire page, as generally desired.
And any links from the tutorial, back to the main massive README will link back to split versions, leading to fast loads.
This feature was implemented at: github.com/ourbigbook/ourbigbook/issues/131
Note that this huge README style is not recommended however. Ciro Santilli used to do it, but moved away from it. The currently recommended approach is to manually create not too large subtrees in each page. This way, readers can easily view several nearby sections without having to load a new page every time.
If given, add a custom suffix to the output filename of the header when using -S, --split-headers.

If the given suffix is empty, it defaults to -split.

For example, given:
= my h1

== my h2

a --split-headers conversion would normally place my h2 into a file called:

my-h2.html

However, if we instead wrote:

== my h2
{splitSuffix}

it would instead be placed under:

my-h2-split.html

and if we set a custom one as:

== my h2
{splitSuffix=asdf}

it would go instead to:

my-h2-asdf.html
This option is useful if the root of your website is written in OurBigBook, and you want to both:
- have a section that talks about some other project
- host the documentation of that project inside the project source tree
For example, cirosantilli.com with source at github.com/cirosantilli/cirosantilli.github.io has a quick section about OurBigBook: cirosantilli.com#ourbigbook.
Therefore, without a custom suffix, the split header version of that header would go to docs.ourbigbook.com, which would collide with this documentation, which is present in a separate repository: github.com/ourbigbook/ourbigbook.
Therefore a splitSuffix property is used, making the split header version fall under /ourbigbook-split, and leaving the nicer /ourbigbook for the more important project toplevel.

If given on the toplevel header, which normally gets a suffix by default to differentiate it from the non-split version, it replaces the default -split suffix with a custom one.

For example, if you had notindex.bigb as:

= Not index

then it would render to:

notindex-split.html

but if you used instead:

= Not index
{splitSuffix=asdf}

then it would instead be:

notindex-asdf.html
This option is similar to the \H title2 argument, but it additionally:
- creates a new ID that you can refer to, and renders it with the alternate chosen title
- the rendered ID on cross references is the same as what it is a synonym for
- the synonym header is not rendered at all, including in the table of contents
- when using -S, --split-headers, a redirect output file is generated from the synonym to the main ID
Example:

= Parent

== GNU Debugger
{c}

= GDB
{c}
{synonym}

I like to say \x[gdb] because it is shorter than \x[gnu-debugger].

renders something like:

= GNU Debugger

I like to say \a[#gnu-debugger][GDB] because it is shorter than \a[#gnu-debugger][GNU Debugger].

Furthermore, if -S, --split-headers is used, another file gdb.html is generated, which contains a redirection from gdb.html to gnu-debugger.html.

Implemented at: github.com/ourbigbook/ourbigbook/issues/114
Contains the main content of the header. The insane syntax:

= My title

is equivalent to the sane:

\H[1][My title]

and in both cases My title is the title argument.

The title argument is also notably used for automatic ID from title. If a non-toplevel macro has the title argument present but no explicit id argument, an element ID is created automatically from the title, by applying the following transformations:
- do an id output format conversion on the title, to remove for example any HTML tags that would be present in the conversion output
- convert all characters to lowercase. This uses JavaScript case conversion. Note that this does convert non-ASCII characters to lowercase, e.g. É to é.
- if id normalize latin is true (the default), do Latin normalization. This converts e.g. é to e.
- if id normalize punctuation is true (the default), do punctuation normalization. This converts e.g. + to plus.
- convert consecutive sequences of all non-a-z0-9 ASCII characters to a single hyphen -. Note that this leaves non-ASCII characters untouched.
- strip leading or trailing hyphens

Note how those rules leave non-ASCII Unicode characters untouched, except for capitalization changes where applicable, e.g. É to é, as capitalization and determining if something "is a letter or not" in those cases can be tricky.

For toplevel headers, see: the ID of the first header is derived from the filename.
So for example, the following automatic IDs would be generated: Table 2. "Examples of automatically generated IDs".
title | id | latin normalization | punctuation normalization | comments |
---|---|---|---|---|
My favorite title | my-favorite-title | |||
Ciro's markdown is awesome | ciro-s-markdown-is-awesome | ' is an ASCII character, but it is not in a-z0-9 , therefore it gets converted to a hyphen - | ||
É你 | e你 | true | The Latin acute accented e , É , is converted to its lower case form é as per the JavaScript case conversion.The Chinese character 你 is left untouched as Chinese characters have no case, and no ASCII analogue. | |
É你 | é你 | false | Same as the previous, but é is not converted to e since Latin normalization is turned off. | |
C++ is great | c-plus-plus-is-great | true | This is the effect of Punctuation normalization. | |
I love dogs. | i-love-dogs | love is extracted from the italic tags <i>love</i> with id output format conversion. | ||
β Centauri | beta-centauri | Our Latin normalization is amazing and knows Greek! |
For the toplevel header, its ID is derived from the basename of the OurBigBook file without extension, instead of from the title argument.

TODO:
- maybe we should also remove some or all non-ASCII punctuation. All can be done with
\\p{IsPunctuation}
: stackoverflow.com/questions/13925454/check-if-string-is-a-punctuation-character but we need to check that we really want to remove all of them.
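As a rough illustration, the transformation pipeline described above can be sketched in JavaScript. This is a simplified sketch only, not the actual OurBigBook implementation: Latin normalization is approximated here with Unicode NFD decomposition (so unlike the real one it does not know Greek), and punctuation normalization only handles +:

```javascript
// Simplified sketch of automatic ID generation; NOT the actual
// OurBigBook implementation.
function titleToId(title) {
  // Convert all characters to lowercase, including non-ASCII, e.g. 'É' -> 'é'.
  let id = title.toLowerCase();
  // Approximate Latin normalization: decompose and strip combining accents,
  // e.g. 'é' -> 'e'. The real implementation also knows e.g. Greek 'β' -> 'beta'.
  id = id.normalize('NFD').replace(/[\u0300-\u036f]/g, '');
  // Approximate punctuation normalization: only '+' -> 'plus' handled here.
  id = id.replace(/\+/g, '-plus-');
  // Collapse runs of non a-z0-9 ASCII characters into a single hyphen,
  // leaving non-ASCII characters (e.g. Chinese) untouched.
  id = id.replace(/[\x00-\x7F]+/g, s => s.replace(/[^a-z0-9]+/g, '-'));
  // Strip leading and trailing hyphens.
  return id.replace(/^-+|-+$/g, '');
}

console.log(titleToId('C++ is great')); // c-plus-plus-is-great
console.log(titleToId("Ciro's markdown is awesome")); // ciro-s-markdown-is-awesome
console.log(titleToId('É你')); // e你
```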
This conversion type is similar to Automatic ID from title, but it is used in certain cases where we are targeting IDs rather than setting them, notably:
Unlike the \H title2 argument, the synonym does not show up by default next to the title. This is because we sometimes want that, and sometimes not. To make the title appear, you can simply add an empty title2 argument to the synonym header as in:

= GNU Debugger
{c}

= GDB
{c}
{synonym}
{title2}

= Quantum computing

= Quantum computer
{synonym}

which renders something like:

= GNU Debugger (GDB)

= Quantum computing

Note how we added the synonym to the title only when it is not just a simple flexion variant, since

Quantum computing (Quantum computer)

would be kind of useless.

Same as the \x child argument, but in the opposite direction, e.g.:

== Mammal

=== Bat
{tag=flying-animal}

=== Cat

== Flying animal

is equivalent in every way to:

== Mammal

=== Bat

=== Cat

== Flying animal
{child=bat}
Naming rationale:
- parent as the opposite of child is already taken to be the "main parent" via the \H parent argument
- we could have renamed the \H child argument to tags, as in "this header tags that one", but it would be a bit confusing to have tags vs tag

So child vs tag it is for now.

You generally want to use tag instead of the \H child argument, because otherwise some very large header categories are going to contain huge lists of children, which is not very nice when editing.

It is possible to enforce the \H child argument or the \H tag argument in a given project with the lint h-tag option.

The title2 argument can be given to any element that has the title argument.

Its usage is a bit like the description= argument of images, allowing you to add some extra content to the header without affecting its ID. Unlike description= however, title2 shows up on all full references, including appearances in the table of contents, which makes it more searchable.

Its primary use cases are:
- give acronyms, or other short names, for fuller titles such as mathematical/programming notation. One primary reason not to use the acronym as the main section name is to avoid possible ID ambiguities with other acronyms.
- give the header in different languages
For example, given the OurBigBook input:

= Toplevel

The ToC follows:

== North Atlantic Treaty Organization
{c}
{title2=NATO}

\x[north-atlantic-treaty-organization]

\x[north-atlantic-treaty-organization]{full}

the rendered output looks like:

= Toplevel

The ToC follows:

* North Atlantic Treaty Organization (NATO)

== North Atlantic Treaty Organization (NATO)

North Atlantic Treaty Organization

Section 1. "North Atlantic Treaty Organization (NATO)"
Related alternatives to title2 include:
- the \H disambiguate argument, for when you do want to affect the ID to remove ambiguities
- the \H synonym argument
Parentheses are added automatically around all rendered title2.

The title2 argument has a special meaning when applied to a header with the \H synonym argument, see: \H title2 argument of a synonym header.

When the \H toplevel argument is set, the header and its descendants will be automatically output to a separate file, even without -S, --split-headers.

For example, given:
animal.bigb:

= Animal

== Vertebrate

=== Dog
{toplevel}

==== Bulldog

== Invertebrate

and if you convert as:

ourbigbook animal.bigb

we get the following output files:
- animal.html: contains the headers "Animal", "Vertebrate" and "Invertebrate", but not "Dog" and "Bulldog"
- dog.html: contains only the headers "Dog" and "Bulldog"
This option is intended to produce output identical to using includes and separate files, i.e. the above is equivalent to:
animal.bigb
= Animal
== Vertebrate
\Include[dog]
== Invertebrate
dog.bigb
= Dog
{toplevel}
== Bulldog
Or in other words: the toplevel header of each source file gets {toplevel} set implicitly for it by default.

This design choice might change some day. Arguably, the most awesome setup is one in which source files and outputs are completely decoupled. OurBigBook Web also essentially wants this, as ideally we want to store one source per header there, in each DB entry. We shall see.
If given, show a link to the Wikipedia article that corresponds to the header.
If a value is not given, automatically link to the Wikipedia page whose title matches the header exactly, with spaces converted to underscores.
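The default link deduction can be sketched as follows. This is a hypothetical helper for illustration only; the actual implementation may differ, e.g. in how it handles other languages or special characters:

```javascript
// Hypothetical sketch: deduce the default Wikipedia URL from a header
// title by replacing spaces with underscores.
function defaultWikiUrl(title) {
  return 'https://en.wikipedia.org/wiki/' + title.replace(/ /g, '_');
}

console.log(defaultWikiUrl('Tiananmen Square'));
// https://en.wikipedia.org/wiki/Tiananmen_Square
```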
Here is an example with an explicit wiki argument:
==== Tiananmen Square
{wiki=Tiananmen_Square}
which looks like:
or equivalently with the value deduced from the title:
= Tiananmen Square
{wiki}
which looks like:
You can only link to subsections of wiki pages with explicit links as in:
= History of Tiananmen Square
{{wiki=Tiananmen_Square#History}}
which looks like:
Note that in this case, you either need a literal argument {{}} to avoid the creation of an insane topic link with a single word, or to explicitly escape the # character as in:
= History of Tiananmen Square
{wiki=Tiananmen_Square\#History}
Also note that Wikipedia subsections are not completely stable, so generally you would rather want to link to a permalink with a full URL as in:

= Artificial general intelligence
{wiki=https://en.wikipedia.org/w/index.php?title=Artificial_general_intelligence&oldid=1192191193#Tests_for_human-level_AGI}

Note that in this case escaping the # is not necessary, because it is part of the insane link that starts at https://.

OurBigBook automatically adds a table of contents at the end of the first non-toplevel header of every document.
For example, on a standard document with a single toplevel header:

= Animal

Animals are cute!

== Dog

== Cat

the ToC is rendered something like:

= Animal

Animals are cute!

Table of Contents
* Dog
* Cat

== Dog

== Cat
The ToC ignores the toplevel header if you have one.
For when you want a quick outline of the header tree on the terminal, also consider the --log headers option.

To the left of table of contents entries you can click on an open/close icon to toggle the visibility of different levels of the table of contents.
The main use case covered by the expansion algorithm is as follows:
- the page starts with all nodes open to facilitate Ctrl + F queries
- if you click on a node in that state, you close all its children, to get a summarized overview of the contents
- if you click one of those children, it opens only its own children, so you can interactively continue exploring the tree
The exact behaviour is:
- the clicked node is open:
  - state 1: all children are closed. Action: open all children recursively, which puts us in state 2
  - state 2: not all children are closed. Action: close all children, which puts us in state 1. This gives a good overview of the children, without any children of children getting in the way.
- state 3: the clicked node is closed (not showing any children). Action: open it to show all direct children, but not further descendants (i.e. close those children). This puts us in state 1.
Note that those rules make it impossible to close a node by clicking on it: the only way to close a node is to click on its parent. The state transitions are:
- 3 -> 1
- 1 -> 2
- 2 -> 1

but we feel that it is worth it to do things like this to cover the main use case described above without having to add two buttons per entry.

Clicking on the link from a header up to the table of contents also automatically opens up the node for you, in case it had been previously closed manually.
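The click behaviour described above can be summarized as a tiny state machine. A sketch for illustration only, not the actual implementation:

```javascript
// Illustrative ToC click state machine; not the actual implementation.
// 1 = open with all children closed; 2 = open with children open; 3 = closed.
function clickNode(state) {
  switch (state) {
    case 1: return 2; // open all children recursively
    case 2: return 1; // close all children
    case 3: return 1; // open direct children only
  }
}

console.log(clickNode(3)); // 1
console.log(clickNode(1)); // 2
console.log(clickNode(2)); // 1
```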
OurBigBook adds some header metadata to the toplevel header at the bottom of each page. This section describes this metadata.
Although the table of contents has a macro to specify its placement, it is also automatically placed at the bottom of the page, and could be considered a header metadata section.
Lists other sections that link to the current section.

E.g. in:

= tmp

== tmp 1

=== tmp 1 1

=== tmp 1 2

\x[tmp-1]

== tmp 2

\x[tmp-1]

the page tmp-1.html would contain a list of incoming links as:
- tmp-1-2
- tmp-2

since those pages link to the tmp-1 ID.

Lists sections that are secondary children of the current section, i.e. tagged under the current section.
The main header tree hierarchy descendants already show under the table of contents instead.
E.g. in:

= tmp

== Mammal

== Flying

== Animal

=== Bat
{tag=mammal}
{tag=flying}

=== Bee
{tag=flying}

=== Dog
{tag=mammal}

the tagged sections for:
- Mammal will contain Bat and Dog
- Flying will contain Bat and Bee
Shows a list of ancestors of the page. E.g. in:

= Asia

== China

=== Beijing

==== Tiananmen Square

=== Hong Kong

the ancestor lists would be:
- Hong Kong: China, Asia
- Tiananmen Square: Beijing, China, Asia
- Beijing: China, Asia
- China: Asia

so we see that this basically provides a type of breadcrumb navigation.
Used to represent a thematic break between paragraph-level elements:
She pressed the button. Just like that, everything was over.
\Hr
The next morning was a gloomy one. Nobody said a word.
which renders as:
She pressed the button. Just like that, everything was over.

The next morning was a gloomy one. Nobody said a word.
This macro corresponds to a misfeature of HTML/Markdown, and is not encouraged. We instead recommend creating smaller, more specific headers to split sections, as this has all the usual advantages of allowing metadata to be associated with the header, such as -S, --split-headers, topics, likes and discussions. But people asked, and they got it.
A block image, with capital 'I'.

Here is an Image showcasing most of the image properties:

Have a look at this amazing image: \x[image-my-test-image].

\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=The title of my image}
{id=image-my-test-image}
{width=600}
{height=200}
{source=https://en.wikipedia.org/wiki/File:Tianasquare.jpg}
{description=The description of my image.}

which renders as:

Have a look at this amazing image: Figure 12. "The title of my image".

This exemplifies the following parameters:
- title: analogous to the \H title argument. Shows up preeminently, and sets a default ID if one is not given. It is recommended that you don't add a period . to it, as that would show in cross references
- description: the image description argument
- source: a standardized way to credit an image by linking to a URL that contains further image metadata

For further discussion on the effects of ID see: Section 4.2.8.1. "Image ID".
And this is how you make an inline image, with lower case i:
My inline \image[Tank_man_standing_in_front_of_some_tanks.jpg][test image] is awesome.
which renders as:
Inline images can't have captions.

And now for an image outside of \OurBigBookExample to test how it looks directly under the \Toplevel implicit macro: Figure 13.

Here is an image without a description but with an ID so we can link to it: Figure 14.
Have a look at this amazing image: \x[image-my-test-image-2].

\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{id=image-my-test-image-2}

which renders as:

Have a look at this amazing image: Figure 14.

This works because full is the default cross reference style for Image, as otherwise the link text would be empty since there is no title, and OurBigBook would raise an error.

OurBigBook can optionally deduce the title from the basename of the src argument if the titleFromSrc boolean argument is given, or if title-from-src is set as the default media provider for the media type:
Have a look at this amazing image: \x[image-tank-man-standing-in-front-of-some-tanks].
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{titleFromSrc}
which renders as:
Have a look at this amazing image: Figure 15. "Tank man standing in front of some tanks.".
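The deduction can be sketched as follows. This is an illustrative guess at the rule, not the actual implementation: take the basename of src, strip the extension, and replace underscores with spaces:

```javascript
// Hypothetical sketch of titleFromSrc title deduction.
function titleFromSrc(src) {
  const base = src.split('/').pop(); // basename
  return base
    .replace(/\.[^.]+$/, '')         // strip the extension
    .replace(/_/g, ' ');             // underscores to spaces
}

console.log(titleFromSrc('Tank_man_standing_in_front_of_some_tanks.jpg'));
// Tank man standing in front of some tanks
```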
If the image has neither ID nor title nor description nor source, then it does not get a caption at all:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
which renders as:
If the image does not have an ID nor title, then it gets an automatically generated ID, just like every other OurBigBook output HTML element, and it is possible for readers to link to that ID on the rendered version, e.g. as:
Note that the 123 in #_123 is not linked to the Figure <number>, but is just a sequential ID that runs over all elements.

This type of ID is of course not stable across document revisions however, since if an image is added before that one, the link will break. So give an ID or title to anything that you expect users to link to.

Also, it is not possible to link to such images with a cross reference, like any other OurBigBook element with autogenerated temporary IDs.
Another issue to consider is that in paged output formats like PDF, the image could float away from the text that refers to it, so you basically always want to refer to images by ID, and not just by saying "the following image".
We can also see that such an image does not increment the Figure count:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]{id=image-my-test-image-count-before}
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]{id=image-my-test-image-count-after}
which renders as:
If the image has any visible metadata such as source or description however, then the caption does show, and the Figure count gets incremented:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]{source=https://en.wikipedia.org/wiki/File:Tianasquare.jpg}
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]{description=This is the description of my image.}
which renders as:
If you are making a limited repository that will not have a ton of images, then you can get away with simply git tracking your images in the main repository.
With this setup, no further action is needed. For example, with a file structure of:

./README.bigb
./Tank_man_standing_in_front_of_some_tanks.jpg

just use the image from README.bigb as:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
which renders as:
However, if you are making a huge tutorial, which can have an arbitrarily large number of images (e.g. any scientific book), then you likely don't want to git track your images in the main repository.
A generally better alternative is to store images in a separate media repository, and especially store images in a separate media repository and track it as a git submodule.
In this approach, you create a separate GitHub repository, in addition to the main one containing the text, to contain only media such as images.

This approach is more suitable than storing images inside the repository itself if you are going to have a lot of images.
When using this approach, you could of course just point directly to the final image URL, e.g. as in:

\Image[https://raw.githubusercontent.com/ourbigbook/ourbigbook-media/master/Fundamental_theorem_of_calculus_topic_page_arrow_to_full_article.png]

which renders as:

but OurBigBook allows you to use configurations that let you enter just the image basename:

Fundamental_theorem_of_calculus_topic_page_arrow_to_full_article.png

which we will cover next.

In order to get this to work, the recommended repository setup is:
- ./main-repo/.git: main repository at github.com/username/main-repo
- ./main-repo/data/media/.git/: media repository at github.com/username/main-repo-media, and where data/ is gitignored

The directory and repository names are not mandatory, but if you place media in data/media and name its repository by adding the *-media suffix, then ourbigbook will handle everything for you without any further configuration in media-providers.

This particular documentation repository does have a different setup, as can be seen from its ourbigbook.json.

Then, when everything is set up correctly, we can refer to images simply as:
\Image[Fundamental_theorem_of_calculus_topic_page_arrow_to_full_article.png]{provider=github}

which renders as:

In this example, we also needed to set {provider=github} explicitly, since it was not set as the default image provider in our ourbigbook.json. In most projects however, all of your images will be in the default repository, so this won't be needed.

provider must not be given when a full URL is given, because we automatically detect providers from URLs, e.g.:

\Image[https://raw.githubusercontent.com/ourbigbook/ourbigbook-media/master/Fundamental_theorem_of_calculus_topic_page.png]{provider=github}
TODO implement: ourbigbook will even automatically add and push used images in the my-tutorial-media repository for you during publishing!

You should then use the following rules inside my-tutorial-media:
- give every file a very descriptive and unique name, as a full English sentence
- never ever delete any files, nor change their content, unless it is an improvement in format that does not change the information contained in the image. TODO link to nice Wikimedia Commons guideline page

This way, even though the repositories are not fully in sync, anyone who clones the latest version of the *-media directory will be able to view any version of the main repository.

Then, if one day the media repository ever blows up GitHub's limit, you can just migrate the images to another image server that allows arbitrary basenames, e.g. AWS, and just configure your project to use that new media base URL with the media-providers option.

The reason why images should be kept in a separate repository is that images are hundreds or thousands of times larger than hand written text.
Therefore, images could easily fill up the maximum repository size you are allowed: webapps.stackexchange.com/questions/45254/file-size-and-storage-limits-on-github#84746 and then what will you do when GitHub comes asking you to reduce the repository size?
Git LFS is one approach to deal with this, but we feel that it adds too much development overhead.
This is likely the sanest approach possible, as it clearly specifies which media version matches which repository version through the submodule link.
Furthermore, it is possible to make the submodule clone completely optional by setting things up as follows. For your OurBigBook project yourname/myproject, create a yourname/myproject-media repository with the media, and track it as a submodule under yourname/myproject/media.

Then, add to media-providers:
"media-providers": {
"github": {
"default-for": ["image", "video"],
"path": "media",
"remote": "yourname/myproject-media"
}
}
Now, as mentioned at media-providers, everything will work beautifully:
- ourbigbook . local conversion will use images from media/ if it exists, e.g. \Image[myimage.jpg] will render media/myimage.jpg. So after cloning the submodule, you will be able to see the images on the rendered pages without an internet connection. But if the submodule is not cloned, no problem: renders will detect that and automatically use GitHub images.
- then, when you do ourbigbook --publish, the following happen:
  - \Image[myimage.jpg] uses the GitHub URL
  - media/ is automatically pushed to GitHub in case there were any updates
  - also, that directory is automatically gitignored, so it won't be pushed as part of the main repository and thus duplicate things
Wikimedia Commons is another great possibility to upload your images to:
\Image[https://upload.wikimedia.org/wikipedia/commons/thumb/5/5b/Gel_electrophoresis_insert_comb.jpg/450px-Gel_electrophoresis_insert_comb.jpg]
{source=https://commons.wikimedia.org/wiki/File:Gel_electrophoresis_insert_comb.jpg}
which renders as:
OurBigBook likes Wikimedia Commons so much that we automatically parse the image URL, and if it is from Wikimedia Commons, automatically deduce the source for you. So the above image renders the same without the source argument:
\Image[https://upload.wikimedia.org/wikipedia/commons/5/5b/Gel_electrophoresis_insert_comb.jpg]
which renders as:
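The source deduction can be sketched as follows. This is illustrative only; the URL patterns actually supported by OurBigBook may differ:

```javascript
// Hypothetical sketch: deduce the Wikimedia Commons file page from an
// upload.wikimedia.org URL, including thumb/ resized variants.
function commonsSource(url) {
  const m = url.match(
    /^https:\/\/upload\.wikimedia\.org\/wikipedia\/commons\/(?:thumb\/)?\w\/\w\w\/([^/]+)/
  );
  return m ? 'https://commons.wikimedia.org/wiki/File:' + m[1] : undefined;
}

console.log(commonsSource(
  'https://upload.wikimedia.org/wikipedia/commons/5/5b/Gel_electrophoresis_insert_comb.jpg'
));
// https://commons.wikimedia.org/wiki/File:Gel_electrophoresis_insert_comb.jpg
```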
And like for non-Wikimedia images, you can automatically generate a title from the src by setting the titleFromSrc boolean argument, or if title-from-src is set as the default media provider for the media type:
\Image[https://upload.wikimedia.org/wikipedia/commons/5/5b/Gel_electrophoresis_insert_comb.jpg]
{titleFromSrc}
which renders as:
And a quick test for a more complex thumb resized URL:
\Image[https://upload.wikimedia.org/wikipedia/commons/thumb/5/5b/Gel_electrophoresis_insert_comb.jpg/450px-Gel_electrophoresis_insert_comb.jpg]
which renders as:
If you really absolutely want to turn off the source, you can explicitly set:

\Image[https://upload.wikimedia.org/wikipedia/commons/5/5b/Gel_electrophoresis_insert_comb.jpg]
{source=}

which renders as:

But you don't want to do that for the most commonly used Wikimedia Commons license of CC BY+, do you? :-)
Upsides of using Wikimedia Commons for your images:
- makes it easier for other writers to find and reuse your images
- automatically generates resized versions of the uploaded images in several common dimensions, so you can pick the smallest one that fits your desired image height to reduce bandwidth usage
- if you have so many images that they would blow even the size of a separate media repository, this will still work

Downsides:
- forces you to use the Creative Commons license
- requires the content to be educational in nature
- uploading a bunch of images to Wikimedia Commons does feel a bit more laborious than it should, because you have to write down so much repeated metadata for them
We do this by default because OurBigBook is meant to allow producing huge single page documents like Ciro likes it, and in this way:
- images that the user is looking at will load first
- we save a lot of bandwidth for the user who only wants to browse one section
TODO: maybe create a mechanism to disable this for the entire build with ourbigbook.json.

For the love of God, is there no standardized way for SVG to set its background color without a rectangle? stackoverflow.com/questions/11293026/default-background-color-of-svg-root-element viewport-fill was just left in limbo?

And as a result, many many many SVG images online that you might want to reuse just rely on white pages and don't add that background rectangle.
Therefore for now we just force white background on our default CSS of block images, which is what most SVGs will work with. Otherwise, you can lose the entire image to our default black background.
For inline images however, a white background would also be very distracting compared to the nearby inline text, and it would prevent the use case of making rounded smileys, so for now we are just not forcing the background color in that case.
At some point we might just add a color argument to set the background color to an arbitrary value, so that authors can decide what is better for each image.

TODO implement: a mechanism where you enter a textual description of the image inside the code body, and it then converts it to an image, adds it to the -media repo and pushes everything automatically. Start with dot.

Many image arguments are shared between both block and inline images, but not all.
Adds a border around the image. This can be useful to make it clearer where images start and end when the image background color is the same as the background color of the OurBigBook document.
\Image[logo.svg]
{border}
{height=150}
{title=Logo of the OurBigBook Project with a border around it}
which renders as:
The description argument is similar to the image title argument, but allows longer explanations without them appearing in cross references to the image.

For example, consider:
See this image: \x[image-description-argument-test-1].
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=Tank man standing in front of some tanks}
{id=image-description-argument-test-1}
{description=Note how the tanks are green.}
{source=https://en.wikipedia.org/wiki/File:Tianasquare.jpg}
which renders as:
See this image: Figure 25. "Tank man standing in front of some tanks".
In this example, the reference \x[image-description-argument-test-1] expands just to "Tank man standing in front of some tanks", and does not include the description, which only shows on the image.
The description can be as large as you like. If it gets really large however, you might want to consider moving the image to its own header to keep things slightly saner. This will be especially true after we eventually do: github.com/ourbigbook/ourbigbook/issues/180.
If the description contains any element that would take its own separate line, like multiple paragraphs or a list, we automatically add a line grouping the description with the corresponding image to make that clearer, otherwise it can be hard to know which title corresponds to a far away image. Example with multiple paragraphs:
Stuff before the image.
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=Tank man standing in front of some tanks}
{id=image-description-argument-test-2}
{source=https://en.wikipedia.org/wiki/File:Tianasquare.jpg}
{description=Note how the tanks are green.
But the shirt is white.}
Stuff after the image description.
which renders as:
Stuff before the image.Stuff after the image description.
We recommend adding a period or other punctuation to the end of every description.
Analogous to the `\a` `external` argument when checking if the image `src` argument exists or not.

By default, we fix image heights to `height=315`, and let the width be calculated proportionally once the image loads. We therefore ignore the actual image size. This is done to:
- prevent reflows as the page loads images and determines their actual sizes, especially if the user opens the page at a given ID in the middle of the page
- create a more uniform media experience by default, unless a custom image size is actually needed, e.g. if the image needs to be larger

When the viewport is narrow enough, mobile CSS takes over and forces block images to fill 100% of the page width instead, removing the scrollbar. Inline images on the other hand never get a horizontal scrollbar: they are just always capped at the viewport width.
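The proportional width calculation described above can be sketched as follows. This is a minimal illustration, not OurBigBook's actual rendering code; the `displayWidth` name is hypothetical:

```javascript
// Sketch: with a fixed display height (default 315), the display width is
// scaled proportionally from the image's natural dimensions once it loads.
function displayWidth(naturalWidth, naturalHeight, height = 315) {
  return naturalWidth * height / naturalHeight;
}

// A 640x480 image displayed at the default height:
console.log(displayWidth(640, 480)); // 420
```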
When the `height` argument is given, it changes that default height. The width is still calculated proportionally to this new custom height:
\Image[logo.svg]
{height=150}
which renders as:
\Image[logo.svg]
{height=550}
which renders as:
Here's a very long test image:
If given, clicking the image goes to the specified URL rather than to the image's own URL, which is the default.
By default, clicking on a rendered image links to the URL of the image itself. E.g. clicking:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
which renders as:
This would open Tank_man_standing_in_front_of_some_tanks.jpg, as the render produces an `img` surrounded by something like `a href="Tank_man_standing_in_front_of_some_tanks.jpg"`.

If instead we want the image to point to a custom URL, e.g. ourbigbook.com, we could write:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]{link=https://ourbigbook.com}
which renders as:
and now clicking the image leads to ourbigbook.com instead.
Where the image was taken from, e.g.:
\Image[https://upload.wikimedia.org/wikipedia/commons/6/68/Akha_cropped_hires.JPG]
{title=A couple}
{source=https://en.wikipedia.org/wiki/Human}
which renders as:
The `source` is automatically inferred for certain known websites, e.g.:
- Wikimedia Commons: https://upload.wikimedia.org/wikipedia/commons

Example:
\Image[https://upload.wikimedia.org/wikipedia/commons/6/68/Akha_cropped_hires.JPG]
{title=A couple no source}
which renders as:
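The inference above can be sketched roughly as follows. This is a hypothetical simplification, not OurBigBook's actual code, and the exact target URL format it infers may differ:

```javascript
// Sketch: map a Wikimedia Commons upload URL to its File: description page.
function inferSource(src) {
  const prefix = 'https://upload.wikimedia.org/wikipedia/commons/';
  if (src.startsWith(prefix)) {
    const name = src.split('/').pop(); // last path component is the file name
    return 'https://commons.wikimedia.org/wiki/File:' + name;
  }
  return undefined; // unknown host: no source inferred
}

console.log(inferSource(
  'https://upload.wikimedia.org/wikipedia/commons/6/68/Akha_cropped_hires.JPG'));
// https://commons.wikimedia.org/wiki/File:Akha_cropped_hires.JPG
```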
The address of the image. E.g. in:
\Image[image.png]
the `src` is `image.png`.

Analogous to the `\a` `href` argument.

Analogous to the `\H` `title` argument.

This argument is meant to be analogous to the Image `height` argument, but for the width. Usage of this argument is generally discouraged: since we always set a default image height, also passing a width is either unnecessary or may distort the image's correct aspect ratio.
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{width=150}
which renders as:
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{width=550}
which renders as:
The `\Include` macro allows including an external OurBigBook file's headers under the current header.

It exists to allow optional single page HTML output while still retaining the ability to:
- split up large input files into multiple files to make renders faster during document development
- suggest an optional custom output split with one HTML output per OurBigBook input, in order to avoid extremely large HTML pages which could be slow to load
`\Include` takes one mandatory argument: the ID of the section to be included, much like cross references.

There is however one restriction: only toplevel headers can be pointed to. This restriction allows us to easily find the included file in the filesystem, and dispenses with the need to do a first `./ourbigbook` run to generate the ID database. This works because the ID of the first header is derived from the filename.

Headers of the included document are automatically shifted to match the level of a child of the header under which they are being included.
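The header level shifting can be sketched as follows. This is a hypothetical illustration, not OurBigBook's implementation; the `shiftLevels` name is made up:

```javascript
// Sketch: headers of an included file (whose toplevel header has level 1)
// are offset so the toplevel becomes a child of the including header.
function shiftLevels(headers, includeLevel) {
  // headers: [{ level, title }], includeLevel: level of the including header
  return headers.map(h => ({ ...h, level: h.level + includeLevel }));
}

// Including a file under a level-1 header:
console.log(shiftLevels(
  [{ level: 1, title: 'Notindex' }, { level: 2, title: 'Sub' }], 1));
// [ { level: 2, title: 'Notindex' }, { level: 3, title: 'Sub' } ]
```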
If `--embed-includes` is given, the external document is rendered embedded into the current document directly, essentially as if the source had been copy pasted (except for small corrections such as the header offsets).

Otherwise, the following effects happen:
- the headers of the included tree appear in the table of contents of the document as links to the corresponding external files. This is implemented simply by reading a previously generated database file, much like cross file reference internals, which avoids the slowdown of parsing all included files every time. As a result, however, you have to do an initial parse of all files in the project to extract their headers, just as you would need to do when linking to those headers
- the include itself renders as a link to the included document
--embed-includes
Here is an example of inclusion of the files `not-readme.bigb` and `not-readme-2.bigb`:
\Include[not-readme]
\Include[not-readme-2]
\Include[not-readme-with-scope]
The above is the recommended and slightly insaner version of:
\Include[not-readme]
\Include[not-readme-2]
\Include[not-readme-with-scope]
`\Include` magically discards the newline node that follows it if it is just a plaintext node containing exactly a newline. With a double newline, the newline would already have been taken out at the lexing stage as part of a paragraph.

Section 4.2.9.3. "`\Include` example" shows what those actually render like.

When you are in a subdirectory, include resolution is simply relative to the subdirectory. E.g. we could do:
subdir/index.bigb
= Subdir
\Include[notindex]
\Include[subdir2/notindex]
subdir/notindex.bigb
= Notindex
subdir/subdir2/notindex.bigb
= Notindex
It is not currently possible to include from ancestor directories: github.com/ourbigbook/ourbigbook/issues/214.
This option is analogous to the `\H` `parent` argument, but for includes.

For example, suppose you have:
= Animal
== Dog
== Cat
== Bat
and now you want to split `Cat` out to `cat.bigb`.

If you wrote:
= Animal
== Dog
\Include[cat]
== Bat
Cat would be a child of Dog, since that is the previous header, which is not what we want.

Instead, we want to write:
= Animal
== Dog
\Include[cat]{parent=animal}
== Bat
and now Cat will be a child of Animal as desired.
Implemented at: github.com/ourbigbook/ourbigbook/issues/127
This shows what includes render as.
Some \i[italic] text.
which renders as:
Some italic text.
The `JsCanvasDemo` macro allows you to create interactive HTML/JavaScript canvas demos easily.

These demos:
- only start running when the user scrolls over them for the first time
- stop automatically when they leave the viewport

so you can stuff as many of them as you want on a page, and they won't cause the reader's CPU to fry an egg.
\JsCanvasDemo[[
new class extends OurbigbookCanvasDemo {
init() {
super.init('hello');
this.pixel_size_input = this.addInputAfterEnable(
'Pixel size',
{
'min': 1,
'type': 'number',
'value': 1,
}
);
}
draw() {
var pixel_size = parseInt(this.pixel_size_input.value);
for (var x = 0; x < this.width; x += pixel_size) {
for (var y = 0; y < this.height; y += pixel_size) {
var b = ((1.0 + Math.sin(this.time * Math.PI / 16)) / 2.0);
this.ctx.fillStyle =
'rgba(' +
(x / this.width) * 255 + ',' +
(y / this.height) * 255 + ',' +
b * 255 +
',255)'
;
this.ctx.fillRect(x, y, pixel_size, pixel_size);
}
}
}
}
]]
which renders as:
And another one showing off some WebGL:
new class extends OurbigbookCanvasDemo {
init() {
super.init('webgl', {context_type: 'webgl'});
this.ctx.viewport(0, 0, this.ctx.drawingBufferWidth, this.ctx.drawingBufferHeight);
this.ctx.clearColor(0.0, 0.0, 0.0, 1.0);
this.vertexShaderSource = `
#version 100
precision highp float;
attribute float position;
void main() {
gl_Position = vec4(position, 0.0, 0.0, 1.0);
gl_PointSize = 64.0;
}
`;
this.fragmentShaderSource = `
#version 100
precision mediump float;
void main() {
gl_FragColor = vec4(0.18, 0.0, 0.34, 1.0);
}
`;
this.vertexShader = this.ctx.createShader(this.ctx.VERTEX_SHADER);
this.ctx.shaderSource(this.vertexShader, this.vertexShaderSource);
this.ctx.compileShader(this.vertexShader);
this.fragmentShader = this.ctx.createShader(this.ctx.FRAGMENT_SHADER);
this.ctx.shaderSource(this.fragmentShader, this.fragmentShaderSource);
this.ctx.compileShader(this.fragmentShader);
this.program = this.ctx.createProgram();
this.ctx.attachShader(this.program, this.vertexShader);
this.ctx.attachShader(this.program, this.fragmentShader);
this.ctx.linkProgram(this.program);
this.ctx.detachShader(this.program, this.vertexShader);
this.ctx.detachShader(this.program, this.fragmentShader);
this.ctx.deleteShader(this.vertexShader);
this.ctx.deleteShader(this.fragmentShader);
if (!this.ctx.getProgramParameter(this.program, this.ctx.LINK_STATUS)) {
console.log('error ' + this.ctx.getProgramInfoLog(this.program));
return;
}
this.ctx.enableVertexAttribArray(0);
var buffer = this.ctx.createBuffer();
this.ctx.bindBuffer(this.ctx.ARRAY_BUFFER, buffer);
this.ctx.vertexAttribPointer(0, 1, this.ctx.FLOAT, false, 0, 0);
this.ctx.useProgram(this.program);
}
draw() {
this.ctx.clear(this.ctx.COLOR_BUFFER_BIT);
this.ctx.bufferData(this.ctx.ARRAY_BUFFER, new Float32Array([Math.sin(this.time / 60.0)]), this.ctx.STATIC_DRAW);
this.ctx.drawArrays(this.ctx.POINTS, 0, 1);
}
}
Equivalent fully sane with explicit container:
\Ul[
\L[a]
\L[b]
\L[c]
]
which renders as:
- a
- b
- c
The explicit container is required if you want to pass extra arguments to the list macro, e.g. an ID:
\Ul
{id=list-my-id}
[
\L[a]
\L[b]
\L[c]
]
which renders as:
- a
- b
- c
This is the case because without the explicit container in an implicit `ul` list, the arguments would stick to the last list item instead of the list itself.

It is also required if you want ordered lists:
\Ol[
\L[first]
\L[second]
\L[third]
]
which renders as:
1. first
2. second
3. third
Insane nested list with two space indentation:
The indentation must always be exactly equal to two spaces, anything else leads to errors or unintended output.
* a
  * a1
  * a2
  * a2
* b
* c
which renders as:
- a
  - a1
  - a2
  - a2
- b
- c
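The two-space indentation rule can be sketched as a depth computation. This is a hypothetical illustration, not OurBigBook's actual parser, and the `listDepths` name is made up:

```javascript
// Sketch: insane list nesting depth is derived from leading indentation,
// at exactly two spaces per level.
function listDepths(lines) {
  return lines.map(line => {
    const indent = line.length - line.trimStart().length;
    return { depth: indent / 2, text: line.trim().replace(/^\* /, '') };
  });
}

console.log(listDepths(['* a', '  * a1', '  * a2', '* b']));
// [ { depth: 0, text: 'a' }, { depth: 1, text: 'a1' },
//   { depth: 1, text: 'a2' }, { depth: 0, text: 'b' } ]
```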
Equivalent saner nested lists with implicit containers:
\L[
a
\L[a1]
\L[a2]
\L[a2]
]
\L[b]
\L[c]
which renders as:
- a
  - a1
  - a2
  - a2
- b
- c
Insane list item with a paragraph inside of it:
* a
* I have

  Multiple paragraphs.

  * And
  * also
  * a
  * list
* c
which renders as:
- a
- I have

  Multiple paragraphs.

  - And
  - also
  - a
  - list
- c
Equivalent sane version:
\L[a]
\L[
I have

Multiple paragraphs.

\L[And]
\L[also]
\L[a]
\L[list]
]
\L[c]
which renders as:
- a
- I have

  Multiple paragraphs.

  - And
  - also
  - a
  - list
- c
Insane lists may be escaped with a backslash as usual:
\* paragraph starting with an asterisk.
which renders as:
* paragraph starting with an asterisk.
You can also start insane lists immediately at the start of a positional or named argument, e.g.:
\P[* a
* b
* c
]
which renders as:
- a
- b
- c
And now a list outside of `\OurBigBookExample` to test how it looks directly under the `\Toplevel` implicit macro:
- a
- b
- c
Via KaTeX server side, oh yes!
Inline math is done with the dollar sign (`$`) insane macro shortcut:
My inline $\sqrt{1 + 1}$ is awesome.
which renders as:
My inline is awesome.
and block math is done with two or more dollar signs (`$$`):
$$
\sqrt{1 + 1} \\
\sqrt{1 + 1}
$$
which renders as:
The sane version of inline math is a lower case `\m`:
My inline \m[[\sqrt{1 + 1}]] is awesome.
which renders as:
My inline is awesome.
and the sane version of block math is an upper case `\M`:
\M[[
\sqrt{1 + 1} \\
\sqrt{1 + 1}
]]
which renders as:
The capital vs lower case theme is also used in other elements, see: block vs inline macros.
In the sane syntax, as with any other argument, you have to either escape any closing square brackets `]` with a backslash `\`:
My inline \m[1 - \[1 + 1\] = -1] is awesome.
which renders as:
My inline is awesome.
or use the equivalent double open and close square brackets:
My inline \m[[1 - [1 + 1] = -1]] is awesome.
Equation IDs and titles and linking to equations works identically to images, see that section for full details. Here is one equation reference example that links to the following insane syntax equation: Equation 7. "My first insane equation":
$$
\sqrt{1 + 1}
$$
{title=My first insane equation}
which renders as:
and the sane equivalent Equation 8. "My first sane equation":
\M{title=My first sane equation}[[
\sqrt{1 + 1}
]]
which renders as:
Here is a raw one just to test the formatting outside of an `ourbigbook_comment`:
Here is a very long math equation:
See the: <equation Pythagoras theorem>.
$$
c = \sqrt{a^2 + b^2}
$$
{title=Pythagoras theorem}
{description=This important equation allows us to find the distance between two points.}
which renders as:
See the: Equation 11. "Pythagoras theorem".
Example:
See the: <equation Riemann zeta function>.
$$
\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}
$$
{title=Riemann zeta function}
which renders as:
See the: Equation 12. "Riemann zeta function".
OurBigBook ships with several commonly used math macros enabled by default.
The full list of built-in macros can be seen at: default.tex.
Here's one example of using `\dv` from the `physics` package for derivatives:
$$
\dv{x^2}{x} = 2x
$$
which renders as:
Our goal is to collect the most popular macros from the most popular pre-existing LaTeX packages and make them available with this mechanism.
The built-in macros are currently only available on OurBigBook CLI and OurBigBook Web, not when using the JavaScript API directly. We should likely make that possible as well at some point.
If your project has multiple `.bigb` input files, you can share mathematics definitions across all files by adding them to the `ourbigbook.tex` file in the toplevel directory.

For example, if `ourbigbook.tex` contains:
\newcommand{\foo}[0]{bar}
then from any `.bigb` file in the project we can use:
$$
\foo
$$
Note however that this is not portable to OurBigBook Web, and likely never will be, as we want Web source to be reusable across authors. So the only way to avoid macro definition conflicts would be to have a namespace system in place, which sounds hard/impossible.
Ideally, you should only use this as a temporary mechanism while you make a pull request to modify the built-in math macros :-)
Besides using `ourbigbook.tex`, you can also define your own math macros directly in the source code. This is generally fragile however, because it doesn't work:
- across headers on OurBigBook Web
- across different source files on OurBigBook CLI. That can be worked around with `ourbigbook.tex` on CLI, but `ourbigbook.tex` does not work on Web either.

If you still want to do it for some reason, first create an invisible block (with `{show=0}`) containing a `\newcommand` definition:
$$
\newcommand{\foo}[0]{bar}
$${show=0}
which renders as:
We make it invisible because this block only contains KaTeX definitions, and should not render to anything.
Analogously, with a `\gdef` definition:
$$
\gdef\foogdef{bar}
$${show=0}
which renders as:
and the second block using it:
$$
\foogdef
$$
which renders as:
And just to test that `{show=1}` actually shows, although it is useless, and that `{show=0}` skips incrementing the equation count:
$$1 + 1$${show=1}
$$2 + 2$${show=0}
$$3 + 3$${show=1}
which renders as:
Shows both the OurBigBook code and its rendered output, e.g.:
\OurBigBookExample[[
Some `ineline` code.
]]
which renders as:
Some `ineline` code.
which renders as: Some `ineline` code.
Its input should be thought of as a literal code string, and it then injects the rendered output in the document.
This macro is used extensively in the OurBigBook documentation.
OK, this is too common, so we opted for some insanity here: double newline is a paragraph!
Paragraph 1.
Paragraph 2.
which renders as:
Paragraph 1.
Paragraph 2.
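The double-newline rule can be sketched as a trivial split. This is a hypothetical simplification of the lexer behaviour described above, not OurBigBook's actual code:

```javascript
// Sketch: runs of two or more newlines split top-level text into paragraphs.
function splitParagraphs(source) {
  return source
    .split(/\n{2,}/)              // double (or more) newlines separate paragraphs
    .map(s => s.trim())
    .filter(s => s.length > 0);   // drop empty leading/trailing chunks
}

console.log(splitParagraphs('Paragraph 1.\n\nParagraph 2.'));
// [ 'Paragraph 1.', 'Paragraph 2.' ]
```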
Equivalently however, you can use an explicit
\P
macros as well, which is required for example to add properties to a paragraph, e.g.:
\P{id=paragraph-1}[Paragraph 1]
\P{id=paragraph-2}[Paragraph 2]
which renders as:
Paragraph 1
Paragraph 2
Paragraphs are created automatically inside macro arguments whenever a double newline appears.

Note that OurBigBook paragraphs render in HTML as a `div` with `class="p"` and not as a `p`. This means that you can add basically anything inside them, e.g. a list:
My favorite list is:
\Ul[
\li[aa]
\li[bb]
]
because it is simple.
which renders as a single paragraph.
One major advantage of this, is that when writing documentation, you often want to keep lists or code blocks inside a given paragraph, so that it is easy to reference the entire paragraph with an ID. Think for example of paragraphs in the C++ standard.
Dumps its contents directly into the rendered output.
This construct is not XSS safe, see: Section 10.2. "unsafeXss".
Here for example we define a paragraph in raw HTML:
\passthrough[[
<p>Hello <b>raw</b> HTML!</p>
]]
which renders as:
Hello raw HTML!
And for an inline passthrough:
Hello \passthrough[[<b>raw</b>]] world!
which renders as:
Hello raw world!
And so he said:
\Q[
Something very smart
And with multiple paragraphs.
]
and it was great.
which renders as:
And so he said:
Something very smart
And with multiple paragraphs.
and it was great.
And so he said:
> Something very smart
And with multiple paragraphs.
and it was great.
which renders as:
And so he said:
Something very smart
And with multiple paragraphs.
and it was great.
Example with explicit macro:
See the: <quote Hamlet what we are>.
\Q[We know what we are, but not what we may be.]
{title=Hamlet what we are}
{description=This quote refers to human's inability to know their own potential, despite understanding their current abilities.}
which renders as:
See the: Quote 1. "Hamlet what we are".
We know what we are, but not what we may be.
Example with implicit syntax:
See the: <quote Hamlet what we are implicit>.
> We know what we are, but not what we may be.
{title=Hamlet what we are implicit}
{description=This quote refers to human's inability to know their own potential, despite understanding their current abilities.}
which renders as:
See the: Quote 2. "Hamlet what we are implicit".
We know what we are, but not what we may be.
Example with explicit macro:
See the: <quote Julius Caesar star>.
\Q[The fault, dear Brutus, is not in our stars, but in ourselves.]
{title=Julius Caesar star}
which renders as:
See the: Quote 3. "Julius Caesar star".
The fault, dear Brutus, is not in our stars, but in ourselves.
Example with implicit syntax:
See the: <quote Julius Caesar star implicit>.
> The fault, dear Brutus, is not in our stars, but in ourselves.
{title=Julius Caesar star implicit}
which renders as:
See the: Quote 4. "Julius Caesar star implicit".
The fault, dear Brutus, is not in our stars, but in ourselves.
The insane syntax marks:
- headers with `|| ` (pipe, pipe, space) at the start of a line
- regular cells with `| ` (pipe, space) at the start of a line
- row separation with a double newline

For example:
|| Header 1
|| Header 2

| 1 1
| 1 2

| 2 1
| 2 2
which renders as:
Header 1 Header 2 1 1 1 2 2 1 2 2
Empty cells are allowed, without the trailing space however:
| 1 1
|
| 1 3

| 2 1
|
| 2 3
which renders as:
1 1 1 3 2 1 2 3
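The marker rules above can be sketched as a per-line classifier. This is a hypothetical simplification for illustration, not OurBigBook's actual parser:

```javascript
// Sketch: classify one line of insane table syntax.
// "|| " starts a header cell, "| " a data cell, a bare "|" an empty cell,
// and a blank line separates rows.
function classifyTableLine(line) {
  if (line.startsWith('|| ')) return { type: 'header', text: line.slice(3) };
  if (line === '|') return { type: 'cell', text: '' }; // empty cell, no trailing space
  if (line.startsWith('| ')) return { type: 'cell', text: line.slice(2) };
  if (line === '') return { type: 'row-separator' };
  return { type: 'other', text: line };
}

console.log(classifyTableLine('|| Header 1')); // { type: 'header', text: 'Header 1' }
console.log(classifyTableLine('| 1 1'));       // { type: 'cell', text: '1 1' }
```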
Equivalent fully explicit version:
\Table[
\Tr[
\Th[Header 1]
\Th[Header 2]
]
\Tr[
\Td[1 1]
\Td[1 2]
]
\Tr[
\Td[2 1]
\Td[2 2]
]
]
which renders as:
Header 1 Header 2 1 1 1 2 2 1 2 2
Any white space indentation inside an explicit `\Tr` can make the code more readable, and is automatically removed from the final output due to `remove_whitespace_children`, which is set for `\Table`.

To pass further arguments to an implicit table, such as `title` or `id`, you need to use an explicit `\Table` macro as in: Table 3. "My table title".

We would like to remove that explicit toplevel requirement as per: github.com/ourbigbook/ourbigbook/issues/186. The rules for when the caption shows up are similar to those of images, as mentioned at Section 4.2.8.1.1. "Image caption".
\Table
{title=My table title}
{id=table-my-table}
[
|| Header 1
|| Header 2
| 1 1
| 1 2
| 2 1
| 2 2
]
which renders as:
Header 1 Header 2 1 1 1 2 2 1 2 2
Multiple source lines, including paragraphs, can be added to a single cell with insane syntax by indenting the cell with exactly two spaces, just as for lists, e.g.:
|| h1
|| h2
|| h3

  h3 2

| 11
| 12

  12 2

| 13

| 21
| 22
| 23
which renders as:
h1 h2 h3h3 211 1212 213 21 22 23
Arbitrarily complex nested constructs may be used, e.g. a table inside a list inside a table:
| 00
| 01

  * l1
  * l2

    | 20
    | 21

    | 30
    | 31

| 10
| 11
which renders as:
00 01
- l1
l2
20 21 30 31 10 11
And now a table outside of `\OurBigBookExample` to test how it looks directly under the `\Toplevel` implicit macro:
Header 1 | Header 2 |
---|---|
1 1 | 1 2 |
2 1 | 2 2 |
And a fully insane one:
Header 1 | Header 2 |
---|---|
1 1 | 1 2 |
2 1 | 2 2 |
And now a larger one to see how the style is looking:
Header 1 | Header 2 | Header 3 | Header 4 |
---|---|---|---|
1 1 | 1 2 | 1 3 | 1 4 |
2 1 | 2 2 | 2 3 | 2 4 |
3 1 | 3 2 | 3 3 | 3 4 |
4 1 | 4 2 | 4 3 | 4 4 |
5 1 | 5 2 | 5 3 | 5 4 |
6 1 | 6 2 | 6 3 | 6 4 |
7 1 | 7 2 | 7 3 | 7 4 |
8 1 | 8 2 | 8 3 | 8 4 |
JavaScript interactive on-click table sorting is enabled by default, try it out by clicking on the header row:
Powered by: github.com/tristen/tablesort
|| String col
|| Integer col
|| Float col
| ab
| 2
| 10.1
| a
| 10
| 10.2
| c
| 2
| 3.4
which renders as:
String col Integer col Float col ab 2 10.1 a 10 10.2 c 2 3.4
Very analogous to images, only differences will be documented here.
In the case of videos, where to store your media becomes even more critical, since videos are even larger than images, making several storage approaches impractical off the bat. As a result, Wikimedia Commons is one of the best options, much like for images:
\Video[https://upload.wikimedia.org/wikipedia/commons/8/85/Vacuum_pump_filter_cut_and_place_in_eppendorf.webm]
{id=sample-video-in-wikimedia-commons}
{title=Nice sample video stored in Wikimedia Commons}
{start=5}
which renders as:
We also handle more complex transcoded video URLs just fine:
\Video[https://upload.wikimedia.org/wikipedia/commons/transcoded/1/19/Scientific_Industries_Inc_Vortex-Genie_2_running.ogv/Scientific_Industries_Inc_Vortex-Genie_2_running.ogv.480p.vp9.webm]
{id=sample-video-in-wikimedia-commons-transcoded}
{title=Nice sample video stored in Wikimedia Commons transcoded}
which renders as:
Commons is better than YouTube if your content is on-topic there because:
- they have no ads
- it allows download of the videos: www.quora.com/Can-I-download-Creative-Commons-licensed-YouTube-videos-to-edit-them-and-use-them.
- it makes it easier for other users to find and re-use your videos
If your video does not fit the above Wikimedia Commons requirements, YouTube could be a good bet. OurBigBook automatically detects YouTube URLs for you, so the following should just work:
\Video[https://youtube.com/watch?v=YeFzeNAHEhU&t=38]
{id=sample-video-from-youtube-implicit-youtube}
{title=Nice sample video embedded from YouTube implicit from `youtube.com` URL}
which renders as:
The `youtu.be` domain hack URLs also work:
\Video[https://youtu.be/YeFzeNAHEhU?t=38]
{id=sample-video-from-youtube-implicit-youtu-be}
{title=Nice sample video embedded from YouTube implicit from `youtu.be` URL}
which renders as:
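The URL detection described above can be sketched as follows. This is a hypothetical simplification, not OurBigBook's actual code; `parseYoutubeUrl` is a made-up name:

```javascript
// Sketch: extract the video ID and start time from youtube.com and
// youtu.be URLs.
function parseYoutubeUrl(href) {
  const url = new URL(href);
  if (url.hostname.endsWith('youtube.com')) {
    // https://www.youtube.com/watch?v=<id>&t=<start>
    return { id: url.searchParams.get('v'), start: url.searchParams.get('t') };
  }
  if (url.hostname === 'youtu.be') {
    // https://youtu.be/<id>?t=<start>
    return { id: url.pathname.slice(1), start: url.searchParams.get('t') };
  }
  return null; // not a YouTube URL
}

console.log(parseYoutubeUrl('https://youtu.be/YeFzeNAHEhU?t=38'));
// { id: 'YeFzeNAHEhU', start: '38' }
```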
Alternatively, you can reach the same result in a more explicit and minimal way by setting the `{provider=youtube}` and `start` arguments:
\Video[YeFzeNAHEhU]{provider=youtube}
{id=sample-video-from-youtube-explicit}
{title=Nice sample video embedded from YouTube with explicit `youtube` argument}
{start=38}
which renders as:
When the `youtube` provider is selected, the Video address should only contain the YouTube video ID, which shows in the YouTube URL for the video as:
https://www.youtube.com/watch?v=<video-id>
Remember that you can also enable the `youtube` provider by default in your `ourbigbook.json` with:
"media-provider": {
  "youtube": {"default-for": "video"}
}
But you can also use raw video files from any location that can serve them of course, e.g. here is one stored in this repository: Video 11. "Nice sample video stored in this repository".
\Video[Tank_man_side_hopping_in_front_of_some_tanks.mp4]
{id=sample-video-in-repository}
{title=Nice sample video stored in this repository}
{source=https://www.youtube.com/watch?v=YeFzeNAHEhU}
{start=3}
which renders as:
And as for images, setting `titleFromSrc` automatically calculates a title for you:
\Video[Tank_man_side_hopping_in_front_of_some_tanks.mp4]
{titleFromSrc}
{source=https://www.youtube.com/watch?v=YeFzeNAHEhU}
which renders as:
Unlike image lazy loading, we don't support video lazy loading yet because:
- non-`youtube` videos use the `video` tag, which has no `loading` property yet
- `youtube` videos are embedded with an `iframe`, and `iframe` has no `loading` property yet
Both of these cases could be worked around with JavaScript:
- non-`youtube`: set `src` from JavaScript as shown for images: stackoverflow.com/questions/2321907/how-do-you-make-images-load-lazily-only-when-they-are-in-the-viewport/57389607#57389607. But this breaks page semantics, and we don't know how to work around that
- `youtube` videos: same as above for the `iframe`, but this should be less problematic since YouTube videos are not viewable without JavaScript anyway, and who cares about `iframe` semantics?
The time at which to start playing the video, in seconds. Works for both `youtube` and non-YouTube videos.

Every macro in OurBigBook can have an optional `id`, and many also have a reserved `title` property. When a macro in the document has a `title` argument but no `id` argument, it gets an auto-generated ID from the title: automatic ID from title.

Usually, the most convenient way to write cross references is with the insane syntax with delimiting angle brackets. More details at: insane cross reference.
<Cross references> are awesome.
which renders as:
Cross references are awesome.
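The automatic ID from title conversion mentioned above can be sketched as follows. This is a hypothetical simplification, not OurBigBook's actual implementation, which handles more cases (e.g. Unicode normalization):

```javascript
// Sketch: derive an ID from a title by lowercasing and turning runs of
// non-alphanumeric characters into single hyphens.
function titleToId(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // non-alphanumeric runs become hyphens
    .replace(/^-+|-+$/g, '');     // trim leading/trailing hyphens
}

console.log(titleToId('Cross references')); // cross-references
```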
The sane equivalent to this is:
\x[cross-reference]{c}{p} are awesome section.
which renders as:
Cross references are awesome section.
Note how that is more verbose, especially because here we use both the `\x` `c` argument and the `\x` `p` argument to capitalize and pluralize as desired.

Another sane equivalent would be to add an explicit link body as in:
\x[cross-reference][Cross references] are awesome.
which renders as:
Cross references are awesome.
When you use an insane cross reference (`<>`) such as in:
<Cross references> are awesome.
which renders as:
Cross references are awesome.
it gets expanded exactly to the sane equivalent:
\x[Cross references]{magic} are awesome
so we see that the `\x` `magic` argument gets added. It is that argument that, for example, adds the missing `-` and removes the pluralization to find the correct ID `cross-reference`. For more details, see the documentation of the `\x` `magic` argument.

Like other insane constructs, insane cross references are exactly equivalent to the sane version, so you can just add other arguments after the construct, e.g.:
<Cross references>{full} are awesome.
which renders as:
Section 4.2.20. "Cross reference" are awesome.
which gets converted to exactly the same as the sane:
\x[cross-reference]{full} are awesome.
which renders as:
Section 4.2.20. "Cross reference" are awesome.
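The magic ID resolution sketched in prose above can be illustrated as follows. This is a hypothetical simplification: the real resolution also handles capitalization and uses the pluralize library rather than naively stripping a trailing "s":

```javascript
// Sketch: the magic argument hyphenates the reference text and tries both
// the as-written and singularized forms against the ID database.
function magicCandidates(text) {
  const id = text.toLowerCase().replace(/[^a-z0-9]+/g, '-');
  const singular = id.replace(/s$/, ''); // naive singularization of last word
  return [id, singular]; // candidate IDs, tried in order
}

console.log(magicCandidates('Cross references'));
// [ 'cross-references', 'cross-reference' ]
```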
In most cases it is generally more convenient to simply use the `\x` `magic` argument through insane cross references instead of the `c` and `p` arguments described in the rest of this section; see also: Section 4.2.20.3. "Inflection vs magic".

A common usage pattern is that we want to use header titles in non-full cross references as the definition of a concept without repeating the title, for example:
== Dog
Cute animal.
\x[cats][Cats] are its natural enemies.
== Cats
This is the natural enemy of a \x[dog][dog].
\x[dog][Dogs] are cute, but they are still the enemy.
One example of a cat is \x[felix-the-cat].
=== Felix the Cat
Felix is not really a \x[cats][cat], just a cartoon character.
However, word inflection makes it much harder to avoid retyping the definition again.
For example, in the previous example, without any further intelligent behaviour we would be forced to re-type `\x[dog][dog]` instead of the desired `\x[dog]`.

OurBigBook can take care of some inflection cases for you.
For capitalization, both headers and cross reference macros have the `c` boolean argument, which stands for "capitalized":
- for headers, `c` means that the header title has fixed capitalization as given in the title, i.e.:
  - if the title has a capital first character, it will always show as a capital, as is the case for most proper nouns
  - if it is lower case, it will also always remain lower case, as is the case for some rare proper nouns, notably the names of certain companies
  This means that for such headers, `c` in the `\x` has no effect. Maybe we should give an error in that case. But lazy now, send PR.
- for cross reference macros, `c` means that the first letter of the title should be capitalized. Using this option is required when you are starting a sentence with a non-proper noun.

Capitalization is handled by a JavaScript case conversion.
For pluralization, cross reference macros have the `p` boolean argument, which stands for "pluralize":
- if given and true, this automatically pluralizes the last word of the target title by using the blakeembrey/pluralize library
- if given and false, automatically singularize
- if not given, don't change the number

If your desired pluralization is any more complex than modifying the last word of the title, you must do it manually however.
With those rules in mind, the previous OurBigBook example can be written with less repetition as:
== Dog
Cute animal.
\x[cats]{c} are its natural enemies.
== Cats
This is the natural enemy of a \x[dog].
\x[dog]{p} are cute, but they are still the enemy.
One example of a cat is \x[Felix the Cat].
=== Felix the Cat
{c}
Felix is not really a \x[cats][cat], just a cartoon character.
If plural and capitalization don't handle your common desired inflections, you can also just create custom ones with the `\H` `synonym` argument.

Now for a live example for quick and dirty interactive testing:
\x[inflection-example-not-proper]{full}
which renders as:
This is the default automatic pluralization/singularization library used by OurBigBook cross reference title inflection.
That library handles most cases well, but note that English language perfection is never possible with it as it would likely require having word databases which the authors do not wish to support, e.g. to deal with uncountable nouns such as "mathematics" correctly: github.com/plurals/pluralize/issues/60#issuecomment-310740594
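The "last word only" rule described in the `p` argument section can be sketched as follows. This is a deliberately naive illustration (the real blakeembrey/pluralize library handles irregular forms like "foot"/"feet"); `pluralizeLastWord` is a made-up name:

```javascript
// Sketch: pluralize only the last word of a title, as done for cross
// reference title inflection. Naively appends "s" for illustration.
function pluralizeLastWord(title) {
  const words = title.split(' ');
  words[words.length - 1] += 's';
  return words.join(' ');
}

console.log(pluralizeLastWord('Felix the Cat')); // Felix the Cats
```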
The `\x` `magic` argument was introduced later, and basically became a better alternative to cross reference title inflection in all but the following cases:
- `\H` `disambiguate` argument: disambiguate prevents the determination of plural inflection. E.g. in:
  = Python {disambiguate=animal}

  I like <python animal>.
  there is currently no way to make it output `Pythons` in the plural without resorting to either the `\x` `p` argument or an explicit content, because if you wrote:
  I like <pythons animal>.
  it would just lead to "ID not found", as we would try the plural vs singular on `animal` only. Maybe one day we can implement an even insaner system that understands that the parenthesis should be skipped for the inflection, as in:
  I like <pythons (animal)>.
  github.com/ourbigbook/ourbigbook/issues/244
- plural headers. We only attempt to singularize arguments for now, not pluralize them. So if you had:
  == Dogs

  My <dog> is nice.
  you would instead need to write:
  My <dog>{p=0} is nice.
  or:
  My <dog>[dog] is nice.
If you use \x within a title, which most commonly happens for image titles, that can generate complex dependencies between IDs, which would either be harder to implement, or lead to infinite recursion.
To prevent such problems, OurBigBook emits an error if you use an \x without content in the title of one of the following elements:
- any header. For example, the following gives an error:
= h1 {id=myh1}
== \x[myh1]
This could be solved by either adding a content to the reference:
= h1 {id=myh1}
== \x[myh1][mycontent]
or by adding an explicit ID to the header:
= h1 {id=myh1}
== \x[myh1] {id=myh2}
- a non-header (e.g. an image) that links to the title of another non-header. For non-headers, things are a bit more relaxed, and we can link to headers, e.g.:
= h1
\Image[myimg.jpg] {title=my \x[h1]}
This is allowed because OurBigBook calculates IDs in two stages: first for all headers, and only later for non-headers.
What you cannot do is link to another image, e.g.:
\Image[myimg.jpg] {id=myimage1} {title=My image 1}
\Image[myimg.jpg] {title=my \x[myimage1]}
and there the workarounds are much the same as for headers: either explicitly set the cross reference content:
\Image[myimg.jpg] {id=myimage1} {title=My image 1}
\Image[myimg.jpg] {title=my \x[myimage1][My image 1]}
or explicitly set an ID:
\Image[myimg.jpg] {id=myimage1} {title=My image 1}
\Image[myimg.jpg] {id=myimage2} {title=my \x[myimage1]}
TODO both workarounds are currently broken: "Image title with x to image with content incorrectly disallowed". We forgot to add a test earlier on, and things inevitably broke... Should not be hard to fix though, we are just overchecking.
While it is technically possible to relax the above limitations and give an error only in case of loops, it would require a bit of extra work which we don't want to put in right now: github.com/ourbigbook/ourbigbook/issues/95.
Furthermore, the above rules do not exclude infinite rendering loops, but OurBigBook detects such loops and gives a nice error message, this has been fixed at: github.com/ourbigbook/ourbigbook/issues/34
For example this would contain an infinite loop:
\Image[myimg.jpg]
{id=myimage1}
{title=\x[myimage2]}
\Image[myimg.jpg]
{id=myimage2}
{title=\x[myimage1]}
This infinite recursion is fundamentally not technically solvable: the user has to manually break the loop by providing an x content explicitly, e.g. in either:
\Image[myimg.jpg]
{id=myimage1}
{title=\x[myimage2][my content 2]}
\Image[myimg.jpg]
{id=myimage2}
{title=\x[myimage1]}
or:
\Image[myimg.jpg]
{id=myimage1}
{title=\x[myimage2]}
\Image[myimg.jpg]
{id=myimage2}
{title=\x[myimage1][my content 1]}
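The loop detection described above can be sketched as a cycle search over the title dependency graph. The data model below (each ID mapped to the IDs its title references) is purely illustrative, not the actual OurBigBook internals:

```python
# Sketch of render-loop detection over \x dependencies between titles.
# `deps` maps each element ID to the IDs referenced from its title.
# Illustrative only: not the actual OurBigBook data model.
def has_render_loop(deps, node, path=()):
    """True if expanding `node`'s title eventually requires itself."""
    if node in path:
        return True  # we came back to a node already being expanded
    return any(has_render_loop(deps, dep, path + (node,))
               for dep in deps.get(node, []))

# The infinite loop example from above: each title references the other.
deps = {"myimage1": ["myimage2"], "myimage2": ["myimage1"]}
print(has_render_loop(deps, "myimage1"))  # True
```

Providing an explicit content on one of the two references corresponds to deleting one edge of this graph, which breaks the cycle.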
A closely related limitation is the simplistic approach to the \x id output format.
Reference to a non-first header of another file:
\x[h2-in-not-the-readme]
which renders as:
To make toplevel links cleaner, if the target header is the very first element of the other page, then the link does not get a fragment, e.g.:
\x[not-readme]
is rendered as:
<a href="not-readme"
and not:
<a href="not-readme#not-readme"
while:
\x[h2-in-not-the-readme]
is rendered with the fragment:
<a href="not-readme#h2-in-not-the-readme"
Reference to the first header of another file that is a second inclusion:
\x[included-by-not-readme]
which renders as:
Reference to another header of another file, with
full
:
\x[h2-in-not-the-readme]{full}.
which renders as:
Note that when full is used with references in another file in multi page mode, the number is not rendered, as explained at: Section 4.2.20.6.4.1. "\x full argument in cross file references".
Reference to an image in another file:
\x[image-not-readme-xi]{full}.
which renders as:
Reference to an image in another file:
\x[image-figure-in-not-the-readme-without-explicit-id]{full}.
which renders as:
Remember that the ID of the toplevel header is automatically derived from its file name, that's why we have to use:
\x[not-readme]
which renders as:
instead of:
\x[not-the-readme]
Reference to a subdirectory:
\x[subdir]
\x[subdir/h2]
\x[subdir/notindex]
\x[subdir/notindex-h2]
which renders as:
Implemented at: github.com/ourbigbook/ourbigbook/issues/116
Reference to an internal header of another file: h2 in not the README. By default, that header ID gets prefixed by the ID of the top header.
When using --embed-includes mode, the cross file references end up pointing to an ID inside the current HTML element, e.g.:
<a href="#not-readme">
rather than:
<a href="not-readme.html/#not-readme">
This is why IDs must be unique for elements across all pages.
When running in Node.js, OurBigBook dumps the IDs of all processed files to a
out/db.sqlite3
file in the out
directory, and then reads from that file when IDs are needed.When converting under a directory that contains
ourbigbook.json
, out/db.sqlite3
is placed inside the same directory as the ourbigbook.json
file.If there is no
ourbigbook.json
in parent directories, then out/db.sqlite3
is placed in the current working directory. These follow the principles described at: the current working directory does not matter when there is a ourbigbook.json.
db.sqlite3 is not created or used when handling input from stdin. When running in the browser, the same JavaScript API will send queries to the server instead of a local SQLite database.
To inspect the ID database to debug it, you can use:
sqlite3 out/db.sqlite3 .dump
It is often useful to dump a single table, e.g. to dump the ids table:
sqlite3 out/db.sqlite3 '.dump ids'
and one particularly important query is to dump a list of all known IDs:
sqlite3 out/db.sqlite3 'select id from ids'
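The same list-all-IDs query can be issued from any SQLite client. Here is a minimal Python sketch; it assumes only that the database has an ids table with an id column, as the dump commands above suggest (the real schema has more columns), and uses an in-memory stand-in rather than a real out/db.sqlite3:

```python
import sqlite3

# Query the ID database the same way as:
#   sqlite3 out/db.sqlite3 'select id from ids'
# Assumes only an `ids` table with an `id` column, as the dump commands
# above suggest; the real schema has additional columns.
def list_ids(conn):
    return [row[0] for row in conn.execute("SELECT id FROM ids")]

# Demo against an in-memory stand-in for out/db.sqlite3:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ids (id TEXT)")
conn.executemany("INSERT INTO ids VALUES (?)", [("dog",), ("felix-the-cat",)])
print(list_ids(conn))  # ['dog', 'felix-the-cat']
```

To run against a real conversion output, pass sqlite3.connect("out/db.sqlite3") instead of the in-memory connection.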
You can force
ourbigbook
to not use the ID database with the --no-db
command line option.
This section describes the philosophy of internal cross references.
In many static website generators, you just link to URL specific paths of internal headers.
In OurBigBook, internal cross references point to IDs, not paths.
For example, suppose "Superconductivity" is a descendant of "Condensed Matter Physics", and that the source for both is located at
condensed-matter-physics.bigb
, so that both appear on the same .html page condensed-matter-physics.html
.When linking to Superconductivity from an external page such as
statistical-physics.bigb
you write just <superconductivity>
and NOT <condensed-matter-physics#superconductivity>
. OurBigBook then automatically tracks where superconductivity is located and produces href="condensed-matter-physics#superconductivity"
for you.This is important because on a static website, the location of headers might change. E.g. if you start writing a lot about superconductivity you would eventually want to split it to its own page,
superconductivity.html
otherwise page loads for condensed-matter-physics.html
would become too slow as that file would become too large. But if your links read
<condensed-matter-physics#superconductivity>
then all links would break when you move things around. So instead, you simply link to the ID
<superconductivity>
and ourbigbook renders links correctly for you wherever the output lands. When moving headers to separate pages, it is true that existing links to subheaders will break, but that simply cannot be helped. Large pages must be split into smaller ones. The issue can be mitigated in the following ways:
- -S, --split-headers, which readers will eventually understand are better permalinks
- JavaScript redirect to split on missing ID, which automatically redirects condensed-matter-physics#superconductivity to superconductivity, potentially hitting a split header if the current page does not contain the HTML ID superconductivity.
For OurBigBook Web, this is even more important, as we have dynamic article trees, so every header can appear on top.
If you really want to use scopes, e.g. enforce the ID of "superconductivity" to be "condensed-matter-physics/superconductivity", then you can use the scope feature. However, this particular case would likely be a bad use case for that feature. You want your IDs to be as short as possible: shorter IDs need less refactoring, and make topics on OurBigBook Web more likely to have matches from other users.
If the target title argument contains a link from either another cross reference or a regular external hyperlink, OurBigBook automatically prevents that link from rendering as a link when no explicit body is given. This is done because nested links are illegal in HTML, and the result would be confusing.
This use case is most common when dealing with media such as images. For example, in:
= afds
\x[image-aa-zxcv-lolol-bb]
== qwer
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=aa \x[zxcv][zxcv] \a[http://example.com][lolol] bb}
== zxcv
the
\x[image-aa-zxcv-lolol-bb]
renders something like:
<a href="#image-aa-zxcv-lolol-bb">aa zxcv lolol bb</a>
and not:
<a href="#image-aa-zxcv-lolol-bb">aa <a href="zxcv">zxcv</a> <a href="http://example.com">lolol</a> bb</a>
Live example:
This is a nice image: \x[image-aa-zxcv-lolol-bb].
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=aa \x[cross-reference-title-link-removal][zxcv] \a[http://example.com][lolol] bb}
which renders as:
This is a nice image: Figure 31. "aa zxcv lolol bb".
Capitalizes the first letter of the target title.
For more details, see: Section 4.2.20.2. "Cross reference title inflection".
Setting the child boolean argument on a cross reference to a header, as in:
\x[my-header]{child}
makes that header show up on the list of extra parents of the child.
This argument is deprecated in favor of the \H tag argument.
This allows a section to have multiple parents, e.g. to include it into multiple categories. For example:
= Animal
== Mammal
=== Bat
=== Cat
== Flying animal
These animals fly:
* \x[bat]{child}
These animals don't fly:
* \x[cat]
would render something like:
= Animal
== Mammal
=== Bat (Parent section: Mammal)
(Tags: Flying animal)
=== Cat (Parent section: Mammal)
== Flying animal (Parent section: Animal)
These animals fly:
* \x[bat]
These animals don't fly:
* \x[cat]
so note how "Bat" has a list of tags including "Flying animal", but "Cat" does not, due to the child.
This property does not affect how the table of contents is rendered. We could insert sections there multiple times, but that has the downside that browser Ctrl + F searches would hit the same thing multiple times on the table of contents, which might make finding things harder.
If the second argument, the content argument, is not present, it expands to the header title, e.g.:
== My title{id=my-id}
Read this \x[my-id].
is the same as:
== My title{id=my-id}
Read this \x[my-id][My title].
An explicit content overrides that, e.g.:
== My title{id=my-id}
Read this \x[my-id][amazing section].
The term refers to sections that have a parent/child relationship via mechanisms such as the \x child argument, the \x parent argument or the \H tag argument, rather than via the usual header hierarchy.
Secondary children show up for example on the tagged metadata section, but not on the table of contents, which is what the header hierarchy already shows.
Secondary children are normally basically used as "tags": a header such as
Bat
can be a direct child of Mammal
, and a secondary child of Flying animal
, or vice versa. Both Mammal and Flying animal are then basically ancestors. But we have to choose one main ancestor as "the parent", and other secondary ancestors will be seen as tags.
This option first does ID target from title conversion on the argument, so you can e.g. keep any spaces or use capitalization in the title, as in:
= Animal
== Flying animal
{child=Big bat}
== Big bat
TODO the fact that this transformation is done currently makes it impossible to use "non-standard IDs" that contain spaces or uppercase letters. If someone ever wants that, we could maybe add a separate argument that does not do the expansion, e.g.:
= Animal
== Flying animal
{childId=Big bat}
== Big bat
{id=Big bat}
but definitely the most important use case is having easier to type and read source with the standard IDs.
Allows linking to headers with the
\H
file
argument, e.g.:
= My header
Check out this amazing file: <path/to/myfile.txt>{file}
== path/to/myfile.txt
\x[file_demo/file_demo_subdir/hello_world.js]{file}
which renders as:
\x[Tank_man_standing_in_front_of_some_tanks.jpg]{file}
which renders as:
\x[https://www.youtube.com/watch?v=YeFzeNAHEhU]{file}
which renders as:
To also show the section auto-generated number as in "Section X.Y My title" we add the optional
{full}
boolean argument to the cross reference, for example:
\x[x-full-argument]{full}.
which renders as:
{full}
is not needed for cross references to most macros besides headers, which use full
by default as seen by the default_x_style_full
macro property in --help-macros
. This is for example the case for images. You can force this to be disabled with {full=0}
:
Compare \x[image-my-test-image]{full=0} vs \x[image-my-test-image]{full=1}.
which renders as:
For example, in the following cross file reference:
\x[h2-in-not-the-readme]{full}.
we get just something like:
Section "h2 in not the readme"
instead of:
Section 1.2 "h2 in not the readme"
This is because the number "Section 1.2" might already have been used in the current page, leading to confusion.
This argument makes writing many internal links more convenient, and it was notably introduced because it serves as the sane version of insane cross references.
If given, e.g. as in:
= Internal reference
\x[Internal references]{magic}
the link is treated magically as follows:
- content capitalization and pluralization are detected from the string, and implicitly set the \x c argument and \x p argument. In the example:
- {c} capitalization is set because Internal references starts with an upper case character I
- {p} pluralization is set because Internal references ends in a plural word
In this simple example, the content therefore will be exactly Internal references, as in the source. But note that this does not necessarily need to be the case, e.g. if we had done:
\x[Internal Reference]{magic}
then the content would be Internal reference without capital R, i.e. everything except capitalization and pluralization is ignored. This forgiving way of doing things means that writers don't need to remember the exact ideal capitalization of everything, which is very hard to remember. It also means that any more complex elements will be automatically rendered as usual, e.g. if we had:
= \i[Internal] reference
\x[internal reference]{magic}
then the output would still contain the <i> italic tag. If we had a scope as in \x[my scope/Internal references], then each scope part is checked separately. E.g. in this case we would have upper case Internal references, even though my scope is lowercase, and so {c} would be set.
- the ID is calculated as follows:
- automatic ID from title conversion is performed, with one exception: forward slashes / are kept, in order to make scopes work. In our case there aren't any slashes, so it just gives internal-references. But if instead we had e.g. \x[my scope/internal reference]{magic}, then we would reach my-scope/internal-reference and not my-scope-internal-reference.
- if there is a match to an existing ID, use it. internal-references in the plural does not match, so go to the next step
- if the above failed, try singularizing the last word as in the \x p argument with p=0 before doing automatic ID from title conversion. This gives internal-reference, which does exist, and so we use that.
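The resolution steps above can be sketched as follows. This uses a simplified automatic-ID conversion and a naive strip-an-s singularizer in place of the real pluralize library, so it is an illustration, not the actual implementation:

```python
# Sketch of magic cross reference ID resolution. Simplified automatic
# ID conversion (lowercase, spaces to hyphens) and a naive strip-an-s
# singularizer stand in for the real implementation.
def magic_id(target):
    # Automatic ID from title conversion, keeping forward slashes
    # so that scopes keep working.
    return target.lower().replace(" ", "-")

def resolve_magic(target, known_ids):
    candidate = magic_id(target)
    if candidate in known_ids:            # step: exact match
        return candidate
    parts = candidate.rsplit("/", 1)      # step: singularize last word only
    if parts[-1].endswith("s"):
        parts[-1] = parts[-1][:-1]
    singular = "/".join(parts)
    return singular if singular in known_ids else None

ids = {"internal-reference", "my-scope/internal-reference"}
print(resolve_magic("Internal references", ids))           # internal-reference
print(resolve_magic("my scope/Internal references", ids))  # my-scope/internal-reference
```

Note how the scoped case singularizes only the last path component, mirroring the slash-keeping rule described above.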
There may be some cases where you might still want to use cross reference title inflection however, see: Section 4.2.20.3. "Inflection vs magic".
A magic link can be created more succinctly by surrounding the link with "angle brackets" (<>), e.g.:
<Partial derivative>
is equivalent to:
\x[Partial derivative]{magic}
The parent argument is exactly like the \x child argument, but it reverses the direction of the parent/child relation.
This argument is deprecated in favor of the \H tag argument.
The ref argument of \x marks the link as a reference, e.g.:
Trump said this and that.\x[donald-trump-access-hollywood-tape]{ref}
= Donald Trump Access Hollywood tape
renders something like:
Trump said this and that.<a href="donald-trump-access-hollywood-tape">*</a>
This could currently be replicated without ref by just using:
Trump said this and that.\x[donald-trump-access-hollywood-tape][*]
but later on we might add more precise reference fields like the page of a book or date fetched, as Wikipedia supports.
Implemented at: github.com/ourbigbook/ourbigbook/issues/137
If true, then the target of this link is called a "topic link" and gets treated specially, pointing to an external OurBigBook Web topic rather than a header defined in the current project.
For example, when rendering a static website, a link such as:
\x[Albert Einstein]{topic}
would produce output similar to:
\a[https://ourbigbook.com/go/topic/john-smith][John Smith]
Live example:
\x[Albert Einstein]{topic}
which renders as:
This allows static website creators to easily link to topics they might not have already written about which others may have covered.
The OurBigBook Web instance linked to can be configured with host.
Those links also work on OurBigBook Web rendering of course, and point to the current Web instance.
If an insane magic link starts with a hash sign (#), then it is converted to a topic link instead of a magic link. For example:
<#Albert Einstein>
is equivalent to:
\x[Albert Einstein]{topic}
If an insane topic link is made up of a single word then it can be written in the following even succincter notation, without the need for angle brackets:
I like #dogs
is equivalent to:
I like <#dogs>
which renders as:
I like dogs
Word separation is defined analogously to insane link parsing rules, i.e.:
- # can start from anywhere, including the middle of words, e.g.:
abc#mytopic
would produce a link mytopic immediately preceded by the characters abc.
- # ends at any insane link termination character
Unlike local links, it is not possible to automatically determine the exact pluralization of a topic link because:
- it would require communicating with the OurBigBook Web API, which we could in principle do, but we would rather not have static builds depend on Web instances
- topics can be written by multiple authors, and there could be both plural and singular versions of each topic ID, which makes it hard to determine which one is "correct"
Therefore, it is up to authors to specifically specify the desired pluralization of their topic links:
- by default, topic IDs are automatically singularized, e.g.:
<#Many Dogs>
renders something like:
\a[https://ourbigbook.com/go/topic/many-dog][Many Dogs]
- to prevent this automatic singularization, use the \x p argument with {p=1}, e.g.:
<#Many Dogs>{p=1}
renders something like:
\a[https://ourbigbook.com/go/topic/many-dogs][Many Dogs]
This is unfortunately always necessary for uncountable nouns such as "mathematics":
I like #mathematics{p=1}
which renders as:
I like mathematics
since our underlying pluralization library blakeembrey/pluralize cannot handle uncountable nouns reliably.
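These rules can be sketched as follows. The strip-an-s singularizer again stands in for the real pluralize library, and only the /go/topic/ URL shape shown in the examples above is assumed:

```python
# Sketch of topic link URL generation: singularize the last word by
# default, keep the plural when {p=1}. A naive strip-an-s singularizer
# stands in for the real pluralize library.
def topic_url(title, p=None):
    topic_id = title.lower().replace(" ", "-")
    if p != 1:
        words = topic_id.rsplit("-", 1)   # singularize the last word only
        if words[-1].endswith("s"):
            words[-1] = words[-1][:-1]
        topic_id = "-".join(words)
    return "https://ourbigbook.com/go/topic/" + topic_id

print(topic_url("Many Dogs"))       # .../go/topic/many-dog
print(topic_url("Many Dogs", p=1))  # .../go/topic/many-dogs
```

With this model, #mathematics without {p=1} would wrongly become the topic mathematic, which is exactly the uncountable-noun problem described above.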
Pluralizes or singularizes the last word of the target title.
For more details, see: Section 4.2.20.2. "Cross reference title inflection".
Certain commonly used macros have insane macro shortcuts that do not start with backslash (
\
).
Originally, Ciro wanted to avoid those, but they just feel too good to avoid.
Every insane syntax does however have an equivalent sane syntax.
The style recommendation is: use the insane version which is shorter, unless you have a specific reason to use the sane version.
Insane in our context does not mean worse. It just means "harder for the computer to understand". But it is more important that humans can understand in the first place! It is fine to make the computer work a bit more for us when we are able to.
- Section 4.2.15. "Paragraph": \n\n (double newline)
- Section 4.2.1. "Link": a http://example.com b (space followed by http://)
- Section 4.2.20. "Cross reference": <Cross references> (angle brackets), see: Section 4.2.20.1. "Insane cross reference"
- Section 4.2.13. "Mathematics": $, described at: insane code and math shortcuts
- Section 4.2.4. "Code block": `, described at: insane code and math shortcuts
- Section 4.2.12. "List": * and indentation
- Section 4.2.18. "Table": ||, | and indentation
The insane code and math shortcuts work very analogously and are therefore described together in this section.
The insane inline code syntax:
a `b c` d
is equivalent to the sane:
a \c[[b c]] d
and renders as:
a b c d
The insane block code:
a
``
b c
``
d
is equivalent to the sane:
a
\C[[
b c
]]
d
and renders as:
a
b c
d
Insane arguments always work by abbreviating:
- the macro name
- one or more of its positional arguments, which are fixed as either literal or non-literal for a given insane construct
This means that you can add further arguments as usual. For example, an insane code block with an id can be written as:
a `b c`{id=ef} g
because that is the same as:
a \c[b c]{id=ef} g
which renders as:
a b c g
So we see that the b c argument is the very first argument of \c.
Extra arguments must come after the insane opening, e.g. the following does not work:
a {id=ef}`b c` g
This restriction keeps things easy to parse for humans and machines alike.
Literal backticks and dollar signs can be produced with a backslash escape as in:
a \` \$ b
which renders as:
a ` $ b
It is not possible to escape backticks (`) inside insane inline code, or dollar signs ($) in insane math. The design reason for that is that multiple backticks produce block code.
The upside is that then you don't have to escape anything else, e.g. backslashes (\) are rendered literally. The only way to do it is to use the sane syntax instead:
a \c[[b ` c]] d
a \m[[\sqrt{\$4}]] d
which render as:
a b ` c d
and the corresponding rendered math.
Within block code and math, you can just add more separators:
```
code with two backticks
``
nice
```
which renders as:
code with two backticks
``
nice
OurBigBook Markup macro identifiers can consist of the following characters:
- a-z lowercase
- A-Z uppercase
- 0-9
Since underscores _ and hyphens - are not allowed, camel case macro names are recommended, e.g. for \OurBigBookExample we use the name:
OurBigBookExample
Every argument in OurBigBook is either positional or named.
For example, in a header definition with an ID:
= My asdf
{id=asdf qwer}
{scope}
which is equivalent to the sane version:
\H[1][My asdf]
{id=asdf qwer}
{scope}
we have:
- two positional arguments: [1] and [My asdf]. Those are surrounded by square brackets [] and have no name
- two named arguments: {id=asdf qwer} and {scope}. The first one has name id, followed by the separator =, followed by the value asdf qwer. The separator = is always optional. If not given, it is equivalent to an empty value, e.g.:
{id}
is the same as:
{id=}
You can determine if a macro's arguments are positional or named by using --help-macros. Its output contains something like:
"h": {
  "name": "h",
  "positional_args": [
    {
      "name": "level"
    },
    {
      "name": "content"
    }
  ],
  "named_args": {
    "id": {
      "name": "id"
    },
    "scope": {
      "name": "scope"
    }
  },
and so we see that the level and the content argument are positional arguments, and id and scope are named arguments.
Generally, positional arguments are few (otherwise it would be hard to know which is which), and are almost always used for a given element, so that they save us from typing the name too many times.
The order of positional arguments must of course be fixed, but named arguments can go anywhere. We can even mix positional and named arguments however we want, although this is not advised for clarity.
The following are therefore all equivalent:
\H[1][My asdf]{id=asdf qwer}{scope}
\H[1][My asdf]{scope}{id=asdf qwer}
\H{id=asdf qwer}{scope}[1][My asdf]
\H{scope}[1]{id=asdf qwer}[My asdf]
Just like named arguments, positional arguments are never mandatory.
Most positional arguments will default to an empty string if not given.
However, some positional arguments can have special effects if not given.
For example, an anchor with the first positional argument present (the URL), but not the second positional argument (the link text) as in:
\a[http://example.com]
has the special effect of generating automatic links, as in:
\a[http://example.com][http://example.com]
This can be contrasted with named arguments, for which there is always a default value, notably for boolean arguments.
See also: Section 4.2.1. "Link".
Some positional arguments are required, and if not given OurBigBook reports an error and does not render the node.
This is for example the case for the level of a header.
These arguments are marked with the mandatory: true argument property in --help-macros.
Named arguments marked in --help-macros as boolean: true must either:
- take no value and no = sign, in which case the value is implicitly set to 1
- take a value of exactly 0 or 1
- not be given, in which case a custom per-macro default is used. That value is the default from --help-macros, or 0 if such a default is not given
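The three value rules above can be sketched as follows. The function signature and the way defaults are passed in are illustrative only, not the actual ourbigbook API:

```python
# Sketch of boolean argument value resolution per the three rules above.
# The signature and default handling are illustrative, not the real API.
def boolean_arg_value(given, value=None, default=0):
    if not given:            # not given: per-macro default, else 0
        return default
    if value is None:        # {full} with no = sign: implicitly 1
        return 1
    if value in ("0", "1"):  # {full=0} or {full=1}
        return int(value)
    raise ValueError("boolean argument must be 0 or 1, got: " + value)

print(boolean_arg_value(given=True))              # 1
print(boolean_arg_value(given=True, value="0"))   # 0
print(boolean_arg_value(given=False, default=1))  # 1
```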
For example, the
\x
full
argument of cross references is correctly written as:
\x[boolean-argument]{full}
which renders as:
without the =
sign, or equivalently:
\x[boolean-argument]{full=1}
which renders as:
The full=0
version is useful in the case of reference targets that unlike headers expand the title on the cross reference by default, e.g. images:
\x[boolean-argument]{full=1}
which renders as:
The name "boolean argument" is given by analogy to the "boolean attribute" concept in HTML5.
Positive nonzero integer arguments accept only the characters
[0-9]
as their input, and 0 may not be the first character. If anything else is present, an error is raised.Common arguments are argument names that are present in all macros.
Explicitly sets the ID of a macro.
In OurBigBook Markup, every single macro has an ID, which can be either:
- explicit: extracted from some input given by the user, either the
id
argument or thetitle
argument. Explicit IDs can be referenced in Internal cross references and must be unique - implicit: automatically generated numerical ID. Implicit IDs cannot be referenced in Internal cross references and don't need to be unique. Their primary application is generating on hover links next to everything you hover, e.g. arbitrary paragraphs.
The most common way to assign an ID is implicitly with automatic ID from title conversion for macros that have a
title
argument.
The id argument allows you to either override the automatic ID from title, or provide an explicit ID for elements that don't have a title argument.
Sometimes the short version of a name is ambiguous, and you need to add some extra text to make both its title and ID unique.
For example, the word "Python" could either refer to:
- the programming language: en.wikipedia.org/wiki/Python_(programming_language)
- the genus of snakes: en.wikipedia.org/wiki/Python_(genus)
The
disambiguate
named argument helps you deal more neatly with such problems.Have a look at this example:
from which we observe how
My favorite snakes are \x[python-genus]{p}!
My favorite programming language is \x[python-programming-language]!
\x[python-genus]{full}
\x[python-programming-language]{full}
= Python
{disambiguate=genus}
{parent=disambiguate-argument}
= Python
{c}
{disambiguate=programming language}
{parent=disambiguate-argument}
{title2=.py}
{wiki}
from which we observe how disambiguate:
- gets added to the ID after conversion, following the same rules as automatic ID from title
- shows up on the header between parentheses, much like Wikipedia, as well as in full cross references
- does not show up on non-full references. This makes it much more likely that you will be able to reuse the title automatically on a cross reference without the content argument: we wouldn't want to say "My favorite programming language is Python (programming language)" all the time, would we?
- gets added to the default \H wiki argument inside parentheses, following Wikipedia convention, therefore increasing the likelihood that you will be able to go with the default Wikipedia value
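The ID construction can be sketched as follows, using a simplified automatic-ID conversion (lowercase, spaces to hyphens; the real conversion handles many more characters):

```python
# Sketch: disambiguate is converted like a title and appended to the ID.
# Simplified automatic ID from title conversion; the real one handles
# many more characters than spaces and case.
def header_id(title, disambiguate=None):
    def convert(s):
        return s.lower().replace(" ", "-")
    base = convert(title)
    if disambiguate:
        base += "-" + convert(disambiguate)
    return base

print(header_id("Python", "genus"))                 # python-genus
print(header_id("Python", "programming language"))  # python-programming-language
```

This is why the example above can be referenced as \x[python-genus] and \x[python-programming-language].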
Besides disambiguating headers, the disambiguate argument has a second related application: disambiguating the IDs of images. For example:
\x[image-the-title-of-my-disambiguate-image]{full=0}
\x[image-the-title-of-my-disambiguate-image-2]{full=0}
\x[image-the-title-of-my-disambiguate-image]{full}
\x[image-the-title-of-my-disambiguate-image-2]{full}
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=The title of my disambiguate image}
\Image[Tank_man_standing_in_front_of_some_tanks.jpg]
{title=The title of my disambiguate image}
{disambiguate=2}
Note that unlike for headers, disambiguate does not appear on the title of images at all. It serves only to create a unique ID that can be later referred to. Headers are actually the only case where disambiguate shows up on the visible rendered output. We intend on making this application obsolete however.
This use case is even more useful when title-from-src is enabled by default for the media-providers entry, so you don't have to repeat titles several times over and over.
The JavaScript interface sees arguments as follows:
function macro_name(args)
where args is a dict such that:
- optional arguments have the key/value pairs explicitly given on the call
- mandatory arguments have a key documented by the API, and the value on the call. For example, the link API names its arguments href and text.
Arguments that are opened with more than one square bracket [ or curly brace { are literal arguments.
In literal arguments, OurBigBook is not parsed, and the entire argument is considered as text until a corresponding close with the same number of characters.
Therefore, you cannot have nested content, but it makes it extremely convenient to write code blocks or mathematics.
For example, a multiline code block with double open and double close square brackets inside can be enclosed in triple square brackets:
A literal argument looks like this in OurBigBook:
\C[[
\C[
A multiline
code block.
]
]]
And another paragraph.
which renders as:
A literal argument looks like this in OurBigBook:\C[ A multiline code block. ]
And another paragraph.
The same works for inline code:
The program \c[[puts("]");]] is very complex.
which renders as:
The programputs("]");
is very complex.
Within literal blocks, the only things that can be escaped with backslashes are:
- a leading open square bracket [
- a trailing close square bracket ]
The rule is that:
- if the first character of a literal argument is a sequence of backslashes (\), and it is followed by another argument open character (e.g. [), remove the first \ and treat the other characters as regular text
- if the last character of a literal argument is a \, ignore it and treat the following closing character (e.g. ]) as regular text
See the following open input/output pairs:
\c[[\ b]]
<code>\ b</code>
\c[[\a b]]
<code>\a b</code>
\c[[\[ b]]
<code>[ b</code>
\c[[\\[ b]]
<code>\[ b</code>
\c[[\\\[ b]]
<code>\\[ b</code>
and close examples:
\c[[a \]]
<code>a \</code>
\c[[a \]]]
<code>a ]</code>
\c[[a \\]]]
<code>a \]</code>
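The two rules can be sketched as a post-processing step on the captured literal text. This is a simplification (the real parser applies the rules while scanning for the argument close), but it reproduces the input/output pairs above:

```python
import re

# Sketch of the two literal-argument escape rules as post-processing on
# the captured text (the real parser applies them while scanning):
# 1. a leading run of backslashes followed by an open bracket loses one
#    backslash
# 2. a backslash immediately before the final close bracket is removed,
#    making that bracket regular text
def process_literal(text):
    text = re.sub(r"^\\(\\*\[)", r"\1", text)  # rule 1
    text = re.sub(r"\\(\])$", r"\1", text)     # rule 2
    return text

print(process_literal(r"\[ b"))   # [ b
print(process_literal(r"\\[ b"))  # \[ b
print(process_literal("a \\]"))   # a ]
```

Inputs that match neither rule, such as \a b, pass through unchanged, which is why backslashes normally render literally inside literal arguments.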
If the very first or very last character of an argument is a newline, then that character is ignored if it would be part of a regular plaintext node.
For example:
generates something like:
instead of:
This is extremely convenient to improve the readability of code blocks and similar constructs.
\C[[
a
b
]]
<pre><code>a
b</code></pre>
<pre><code>
a
b
</code></pre>
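The trimming rule above can be sketched as follows (a simplification: the real rule only applies when the newline would be part of a regular plaintext node, as explained next):

```python
# Sketch of the argument newline trimming rule: a single leading and a
# single trailing newline of a plaintext argument are ignored.
# Simplified: the real rule only applies to regular plaintext nodes.
def trim_argument(text):
    if text.startswith("\n"):
        text = text[1:]
    if text.endswith("\n"):
        text = text[:-1]
    return text

print(repr(trim_argument("\na\nb\n")))  # 'a\nb'
```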
The newline is however considered if it would be part of some insane macro shortcut. For example, we can start an insane list inside a quotation as in:
\Q[
* a
* b
]
which renders as:
- a
- b
where the insane list requires a leading newline \n* to work. That newline is not ignored, even though it comes immediately after the \Q[
opening.
The macro name and the first argument, and any two consecutive arguments, can be optionally separated by exactly one newline character, e.g.:
\H
[2]
{scope}
[Design goals]
is equivalent to:
\H[2]{scope}[Design goals]
which is also equivalent to:
\H[2]{scope}
[Design goals]
This greatly improves the readability of long argument lists by having one argument per line.
There is one exception to this however: inside an insane header, any newline is interpreted as the end of the insane header. This is why the following works as expected:
== My header 2 `some code`
{id=asdf}
and the id gets assigned to the header rather than to the trailing code element.

If the document ends in one newline, it is ignored. If it ends in two or more, that generates an error.
Every character that cannot be part of a macro identifier can be escaped with a backslash (\). If you try to escape a macro identifier character, it is of course treated as a macro instead: e.g. \a tries to use a macro called a, rather than escaping the character a.

For some characters, escaping or not makes no difference, because they don't have any meaning in OurBigBook Markup, e.g. currently % is always exactly the same as \%.

But in non-literal macro arguments, you have to escape the following with a backslash if you want them to not have any magical meaning:
- \: backslashes start macros
- [ and ]: open and close positional macro arguments
- { and }: open and close optional macro arguments
- escapes for macros with insane shortcut:
  - < (open angle bracket, less than sign): insane macro shortcut for insane cross references
  - $ (dollar sign): insane macro shortcut for mathematics
  - ` (backtick): insane macro shortcut for code blocks
  - # (hash): insane topic links
Furthermore, you must also escape the following macros with insane shortcut, but only:
- at the start of the document
- after a newline
- at the start of a new argument

The escape rules for literal arguments are described at: Section 4.3.3.5. "Literal arguments".

Backslash escaping is good for short arguments of regular text, but for longer blocks like code blocks or mathematics, you may want to use literal arguments instead.
Each macro argument can have certain properties associated to it.
These properties have programmatic effects, and allow users and developers to more easily understand and create new macro arguments.
Some macro arguments are disabled by default.
These are typically arguments which felt like a good idea one day, but which we ended up regretting.
They can be enabled via ourbigbook.json options (TODO), but doing so will make the project incompatible with OurBigBook Web, so it is not advised.

A macro argument that is inlineOnly can only contain inline macros. Any block macro present in the argument or its descendants leads to a conversion error.

Some notable rules:
- title arguments are always inlineOnly
- all arguments of inline macros are inlineOnly
There are two main rationales for enforcing these rules:
- the HTML h1-h6 header elements can only contain phrasing content (analogous to our inline macros) for the HTML to be valid. We could choose to use styled divs instead of h elements, but this could have a negative SEO impact. All other HTML elements could be replaced by divs without issue however; the problem really is only h.
- on OurBigBook Web, where multiple users work together and many titles from multiple users show on index pages, it is saner to be more restrictive about what is allowed in titles, and to prevent visually very large things from being added, so that bad actors or accidents don't disrupt other users too much
In HTML, certain elements such as <ul> cannot have any text nodes in them, and any whitespace is ignored, see stackoverflow.com/questions/2161337/can-we-use-any-other-tag-inside-ul-along-with-li/60885802#60885802.

A similar concept applies to OurBigBook, e.g.:
\Ul[
\L[aa]
\L[bb]
]
does not parse as:
\Ul[\L[aa]<NEWLINE>\L[bb]<NEWLINE>]
but rather as:
\Ul[\L[aa]\L[bb]]
because the content argument of ul is marked with remove_whitespace_children and automatically removes any whitespace children (such as a newline) as a result.

This also applies to consecutive sequences of auto_parent macro property macros, e.g.:
\L[aa]
\L[bb]
also does not include the newline between the list items.
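The whitespace-removal behaviour can be sketched as follows; the node shape and function name are hypothetical, not the real OurBigBook AST API:

```javascript
// Drop plaintext children that consist only of HTML ASCII whitespace
// (space, \t, \n, \f, \r).
const isWhitespaceOnly = (s) => /^[ \t\n\f\r]*$/.test(s);

function removeWhitespaceChildren(children) {
  return children.filter(
    (c) => !(c.type === 'plaintext' && isWhitespaceOnly(c.text))
  );
}
```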
The definition of whitespace is the same as the ASCII whitespace definition of HTML5: space, \r, \n, \f and \t.

By default, arguments can be given only once. However, arguments with the multiple macro argument property set to true can be given multiple times, and each time the argument is given, the new value is appended to a list containing all the values. An example is the tag argument of \H.

Internally, multiple is implemented by creating a new level in the abstract syntax tree, and storing each argument separately under newly generated dummy nodes as in:
AstNode: H
AstArgument: child
AstNode: Comment
AstArgument: content
AstNode: plaintext
AstNode: x
AstNode: Comment
AstArgument: content
AstNode: plaintext
AstNode: x
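The append semantics of multiple can be sketched with a hypothetical helper (not the real implementation):

```javascript
// Append values for `multiple` arguments; reject repeats otherwise.
function setArg(args, name, value, multiple) {
  if (multiple) {
    if (!(name in args)) args[name] = [];
    args[name].push(value);
  } else if (name in args) {
    throw new Error(`argument "${name}" given multiple times`);
  } else {
    args[name] = value;
  }
}
```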
This section documents ways to classify macro arguments that are analogous to macro argument properties, but which don't yet have clear and uniform programmatic effects, and so are a bit more hand-wavy for now.
The content argument of a macro contains its "main content", i.e. the textual content that shows most prominently once the macro is rendered. It is usually, but not always, the first positional argument of the macro. We should probably make it into an official macro argument property at some point.

In most cases, it is quite obvious which argument is the content argument, e.g.:
- \i macro: in \i[asdf qwer], asdf qwer is the content argument
- \a macro: in \a[https://example.com][example website], example website is the content argument
Some macros however don't have a content argument, especially when they don't show any textual content as their primary rendered output, e.g.:
- \Image macro: this macro has title but not content, e.g. as in \Image[flower.jpg]{title=}, since the primary content is the image rather than any specific text
Philosophically, the content argument of a macro is analogous to the innerHTML of an HTML tag, as opposed to attributes such as href=. The difference is that in OurBigBook Markup every macro argument can contain child elements, while in HTML only the innerHTML, but not the attributes, can.

The title argument is an argument that gets used in the automatic ID from title calculation of macro IDs. Examples:
- headers: = My header is equivalent to \H[1][My header], and My header is the title argument, which is a positional argument in this case
- images: in \Image[flower.jpg]{title=My header}, My header is the title argument, which is a named argument in this case
The description argument is similar to the title argument in that it adds information about some block such as an image or code block. The difference from title is that it does not count toward automatic ID from title calculations.

These are shared concepts that are used across other sections.
Some sequences of macros, such as l from lists and tr from tables, automatically generate implicit parents, e.g.:
\L[aa]
\L[bb]
parses exactly like:
\Ul[
\L[aa]
\L[bb]
]
The children are always added as arguments of the content argument of the implicit parent. If present, the auto_parent macro property determines which auto-parent gets added to those macros.

Every OurBigBook macro is either block or inline:
Some macros have both a block and an inline version, and like any other macro, those are differentiated by capitalization:
Certain common URL protocols are treated as "known" by OurBigBook, and when found they have special effects in some parts of the conversion.
The currently known protocols are:
http://
https://
Effects of known protocols include:
- insane link parsing rules: mark the start of insane links
- store images in a separate media repository: mark an image src to ignore the provider
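The known-protocol check can be sketched as follows (my own helper name, not the OurBigBook API):

```javascript
// The set of currently known protocols, per the list above.
const KNOWN_PROTOCOLS = ['http://', 'https://'];

const hasKnownProtocol = (url) =>
  KNOWN_PROTOCOLS.some((p) => url.startsWith(p));
```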
Some parts of OurBigBook use "JavaScript case conversion".
This means that the conversion is done as if by the toLowerCase/toUpperCase functions. The most important fact about those functions is that they do convert non-ASCII Unicode capitalization, e.g. between É and é. These conversions are also specified in the Unicode standard.
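For example, in Node.js:

```javascript
// Unicode-aware case conversion, as used by "JavaScript case conversion".
console.log('É'.toLowerCase()); // é
console.log('é'.toUpperCase()); // É
```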
If the project toplevel directory of an OurBigBook project is also a git repository, and if git is installed, then the OurBigBook project is said to be a "Git tracked project".

In general, using a macro produces an element, and every element has an ID.
IDs must be unique, and they are used as the target of internal cross references.
E.g. due to Section 4.2.6.4.9.1.1. "Automatic ID from title", the elements:
= Animal
== Big dog
I like <big dogs>.
would have the IDs, respectively:
animal
big-dog
Such IDs are almost always rendered as HTML IDs as something like:
<h1 id="animal">
<h2 id="big-dog">
and can therefore be linked to within a page with the corresponding fragment:
animal.html#big-dog
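A much-simplified sketch of this ID derivation follows; the real automatic ID from title algorithm also handles Unicode, punctuation and more, so this is only an illustration:

```javascript
// Toy version of automatic ID from title: lowercase, hyphenate.
function titleToId(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // non-alphanumeric runs become hyphens
    .replace(/^-+|-+$/g, '');    // trim leading/trailing hyphens
}

console.log(titleToId('Big dog')); // big-dog
```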
IDs that start with an underscore (_) are reserved for OurBigBook usage, and will give an error if you try to use them, in order to prevent ID conflicts. For example:
- the ID of the table of contents is always fixed to _toc, and trying to use that ID for another element produces an error
- elements without an explicit ID may receive automatically generated IDs of the type _1, _2 and so on
If you use a reserved ID, you will get an error message of the type:
error: tmp.bigb:3:1: IDs that start with "_" are reserved: "_toc"
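The check itself can be sketched as follows (hypothetical function name; the message format mirrors the error above):

```javascript
// Reject reserved IDs, which start with an underscore.
function checkId(id) {
  if (id.startsWith('_')) {
    throw new Error(`IDs that start with "_" are reserved: "${id}"`);
  }
  return id;
}
```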
OurBigBook CLI is the executable program called ourbigbook which you get when you install the NPM package: npm install ourbigbook. It is the main command line utility of the OurBigBook Project.

Its functionality will also be exposed through GUI editor support such as Visual Studio Code, to make things nicer for non-technical users.

The main functionalities of the executable are to:
- convert OurBigBook Markup files to HTML files or other formats. The HTML files can then either be viewed from your filesystem in a browser, or uploaded and hosted very cheaply or for free so that others can see them, e.g. on GitHub Pages
- publish your content, either to OurBigBook Web or as a static website

Or, if you are a programmer: OurBigBook CLI is a Static Wiki generator that can be invoked from the command line with the ourbigbook executable. OurBigBook CLI is how cirosantilli.com is published.
OurBigBook Web takes as input the exact same format of OurBigBook Markup files used by OurBigBook CLI. TODO support/improve import/export to/from OurBigBook Web, see also: -W, --web.

The OurBigBook CLI calls the OurBigBook Library to convert each input file.
Convert a .bigb file to HTML, outputting the HTML to a file with the same basename and an .html extension, e.g.:
ourbigbook hello.bigb
firefox out/html/hello.html
Files named README.bigb are automatically converted to index.html so that they will show both in GitHub READMEs and at the website's base address:
ourbigbook README.bigb
firefox out/html/index.html
Convert all .bigb files in a directory to HTML files, e.g. somefile.bigb to out/html/somefile.html:
ourbigbook .
The HTML output files are placed right next to each corresponding .bigb.

The output file can be selected explicitly with: --outfile <outfile>.

Output to stdout instead of saving it to a file:
ourbigbook --stdout README.bigb
In order to resolve cross file references, this actually does two passes:
- first an ID extraction pass, which parses all inputs and dumps their IDs to the ID database
- then a second render pass, which uses the IDs in the ID database
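The two passes can be sketched with a toy model (not the real converter; it only recognizes toplevel = Title headers and resolves IDs against the database):

```javascript
// Pass 1: extract header IDs from every file into the ID database.
function extractIds(files) {
  const idDb = new Map();
  for (const [path, src] of Object.entries(files)) {
    for (const m of src.matchAll(/^=+ (.+)$/gm)) {
      idDb.set(m[1].toLowerCase().replace(/ /g, '-'), path);
    }
  }
  return idDb;
}

// Pass 2 (fragment): resolve a cross reference against the database.
function resolveRef(idDb, id) {
  if (!idDb.has(id)) throw new Error(`cross reference undefined: ${id}`);
  return `${idDb.get(id)}#${id}`;
}
```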
Convert a .bigb file from stdin to HTML and output the contents of <body> to stdout:
printf 'ab\ncd\n' | ourbigbook --body-only
Stdin conversion is a bit different from conversion from a file, in that it ignores the ourbigbook.json and any other setting files present in the current directory or its ancestors. Also, it does not produce any changes to the ID database. In other words, a conversion from stdin is always treated as if it were outside of any project, and therefore should always produce the same results regardless of the current working directory.

Learn the syntax basics in 5 minutes: docs.ourbigbook.com/_obb/dist/editor.
First ensure that Node.js is installed on your computer. You should be able to run the following command successfully from a terminal:
node --version
Now let's play with an OurBigBook template locally:
git clone https://github.com/ourbigbook/template
cd template
npm install
npx ourbigbook .
firefox out/html/index.html
That template can be seen rendered live at: cirosantilli.com/ourbigbook-generate-multifile/. Other templates are documented at: --generate.

To publish to GitHub Pages on your repository you can just fork the repository github.com/ourbigbook/template to your own github.com/johndoe/template and then:
git remote set-url origin git@github.com:johndoe/template.git
npx ourbigbook --publish
and it should now be visible at: johndoe.github.io/template
Then, every time you make a change you can publish the new version with:
git add .
git commit --message 'hacked stuff'
ourbigbook --publish .
or equivalently with the -P, --publish-commit <commit-message> shortcut:
ourbigbook --publish-commit 'hacked stuff'
If you want to publish to your root page johndoe.github.io instead of johndoe.github.io/template, you need to rename the master branch to dev as mentioned at publish to GitHub pages root page:
git remote set-url origin git@github.com:johndoe/johndoe.github.io.git
# Rename master to dev, and delete the old master.
git checkout -b dev
git push origin dev:dev
git branch -D master
git push --delete origin master
npx ourbigbook --publish
The following files of the template control the global style of the output, and you are free to edit them:
- ourbigbook.liquid.html: global HTML template in Liquid format. Available variables are documented at Section 5.5.25. "--template"
- main.scss: the main stylesheet, in Sass format. Sass is just much more convenient to write than raw CSS. That file gets included into the global HTML template ourbigbook.liquid.html at:
<link rel="stylesheet" href="{{ root_relpath }}main.css">
When you run:
npx ourbigbook .
it converts all files in the current directory separately, e.g.:
- README.bigb to out/html/index.html, since README is a magic name that we want to show on the root URL
- not-readme.bigb to out/html/not-readme.html, as this one is a regular name unlike README
- main.scss to main.css
If one of the input files starts getting too large, usually the toplevel README.bigb in which you dump everything by default like Ciro does, you can speed up development and compile files individually with either:
npx ourbigbook README.bigb
npx ourbigbook not-readme.bigb
Note however that when those individual files have a cross file reference to something defined in not-readme.bigb, e.g. via \x[h2-in-not-the-readme], then you must first have done one pass of:
npx ourbigbook .
to parse all files and extract all necessary IDs to the ID database. That can be slightly optimized with the --no-render command line option:
npx ourbigbook --no-render .
which only extracts the IDs but does not render, which speeds things up considerably.
When dealing with large files, you might also be interested in the following amazing options:
To produce a single standalone output file that contains everything the viewer needs to correctly see the page, do:
npx ourbigbook --embed-resources --embed-includes README.bigb
You can now just give the generated out/html/index.html to any reader and they should be able to view it offline without installing anything. The flags are:
- --embed-includes: without this, \Include[not-readme] shows as a link to the file out/html/not-readme.html, which comes from not-readme.bigb. With the flag, the not-readme.bigb output gets embedded into the output out/html/index.html directly
- --embed-resources: by default, we link to CSS and JavaScript that lives inside node_modules. With this flag, that CSS and JavaScript is copied inline into the document instead. One day we will try to handle images that way as well
Install the NPM package globally and use it from the command line for a quick conversion:
npm install -g ourbigbook
printf 'ab\ncd\n' | ourbigbook --body-only
or to a file:
printf 'ab\ncd\n' | ourbigbook > tmp.html
You almost never want to do this except when developing OurBigBook, as it won't be clear what version of ourbigbook the document should be compiled with. Just be a good infant and use OurBigBook with the template that contains a package.json via npx, OK?

Furthermore, the default install of Chromium on Ubuntu 21.04 uses Snap and blocks access to dotfiles. For example, in a sane NVM install, our global CSS would live under /home/ciro/.nvm/versions/node/v14.17.0/lib/node_modules/ourbigbook/_obb/ourbigbook.css, which gets blocked because of the .nvm part:
- forum.snapcraft.io/t/dot-files/7062
- bugs.launchpad.net/snapd/+bug/1607067
- superuser.com/questions/1546550/chromium-81-wont-display-dotfiles-anymore
- askubuntu.com/questions/1184357/why-cant-chromium-suddenly-access-any-partition-except-for-home
- askubuntu.com/questions/1214346/as-a-user-is-there-any-way-to-change-the-confinement-of-a-snap-package
One workaround is to use --embed-resources, but this of course generates larger outputs.

To run master globally from source for development see: Section 12.2. "Run OurBigBook master". This one actually works despite the dotfile issue, since your development path is normally outside of dotfiles.
Try out the JavaScript API with lib_hello.js:
npm install ourbigbook
./lib_hello.js
There are two ways to publish your OurBigBook content:
- as a static website. This means that you generate HTML files from OurBigBook Markup files and then publish them either by:
  - uploading to a static website server such as GitHub Pages by using the publish option
  - converting with --publish-target local and sending a zip with the pages to someone to view locally
- to an OurBigBook Web instance such as OurBigBook.com. This can be done either by:
  - editing in the OurBigBook Web editor directly in your browser
  - uploading OurBigBook Markup files from your computer with the -W, --web option
A fundamental design choice of the OurBigBook Project is that, except for bugs, a single OurBigBook Markup source tree can be published in both of those ways without any changes.

The trade-offs between the two options are highlighted at: OurBigBook Web vs static website publishing.
- static websites are cheaper to host, with many free options such as GitHub Pages. This means that you are likely to always have several free or cheap choices of where to upload your content, making it essentially all but TEOTWAWKI-proof. Pages will also load slightly faster.
- OurBigBook Web has killer multi-user features: OurBigBook Web topics, article upvotes and OurBigBook Web discussions. Furthermore, it also has some non-multi-user features which cannot feasibly be implemented in a static website because they would require too much storage, so on-the-fly generation is the only feasible way to provide them:
  - the OurBigBook Web dynamic article tree
  - article history. Unimplemented as of writing: github.com/ourbigbook/ourbigbook/issues/248
  Its main downside is that it is more expensive to host. The OurBigBook Project will do its best to keep OurBigBook.com uploading as free as possible, but upload limits necessarily have to be stricter than those of static websites, as the underlying operating cost is larger.
The following basenames are considered "index files":
README.bigb
index.bigb
Those basenames have the following magic properties:
- the default output file name for an index file in HTML output is either:
  - index.html when in the project toplevel directory. E.g. README.bigb renders to index.html. Note that GitHub and many other static website hosts then automatically hide the index.html part from the URL, so that your README.bigb hosted at http://example.com will be accessible simply under http://example.com and not http://example.com/index.html
  - the name of the subdirectory in which it is located when not in the project toplevel directory. E.g. mysubdir/index.bigb outputs to mysubdir.html. Previously, we had placed the output in mysubdir/index.html, but that is not as nice, as it makes GitHub Pages produce URLs with a trailing slash as mysubdir/, which is ugly, see also: stackoverflow.com/questions/5948659/when-should-i-use-a-trailing-slash-in-my-url
- the default toplevel header ID of an index file is derived from the parent directory basename rather than from the source file basename
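The output-naming rules above can be sketched as follows (an assumed simplification, not the real path handling):

```javascript
// Map a source path to its HTML output path per the index-file rules.
function outputPath(srcPath) {
  const m = srcPath.match(/^(?:(.+)\/)?(README|index)\.bigb$/);
  if (m) return m[1] ? `${m[1]}.html` : 'index.html';
  return srcPath.replace(/\.bigb$/, '.html');
}

console.log(outputPath('README.bigb'));         // index.html
console.log(outputPath('mysubdir/index.bigb')); // mysubdir.html
```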
This directory is determined by first checking for the presence of an ourbigbook.json file:
- if an ourbigbook.json is found, then the project toplevel directory is the directory that contains that file
- otherwise, if the input path is a descendant of the current working directory, the current working directory is used, see also: the current working directory does not matter when there is a ourbigbook.json
- otherwise, if the input path is a directory, it is used
- otherwise, the directory containing the input file is used
For example, consider the following file structure relative to the current working directory:
path/to/notindex.bigb
In this case:
- if there is no ourbigbook.json file:
  - if we run ourbigbook .: the toplevel directory is the current directory, and so notindex.bigb has ID path/to/notindex
  - if we run ourbigbook path: same
  - if we run ourbigbook path/to: same
  - if we run ourbigbook path/to/notindex.bigb: same
- if there is a path/ourbigbook.json file:
  - if we run ourbigbook .: the toplevel directory is the current directory, because the ourbigbook.json is below the entry point and is not seen, and so notindex.bigb has ID path/to/notindex
  - if we run ourbigbook path: the toplevel directory is the directory with the ourbigbook.json, path, and so notindex.bigb has ID to/notindex
  - if we run ourbigbook path/to: same
  - if we run ourbigbook path/to/notindex.bigb: same
This is the index file present in the project toplevel directory.
The "home article" is the first article of the toplevel index file. E.g. in README.bigb:
= John Smith's Homepage
== I like dogs
"John Smith's Homepage" is the home article, but "I like dogs" is not.
The home article has some special handling done to it, notably:
- it renders as "Home" in many places, such as the breadcrumb, a way to make things more unified and succinct, especially on web
- it automatically gets two IDs:
  - a main empty ID
  - a synonym to that empty ID
For example, we could write:
= John Smith's Homepage
== I like dogs
This is my homepage: <> And also: <John Smith's Homepage>
Here <> renders as "Home" and <John Smith's Homepage> renders as John Smith's Homepage, but both link to the same location.

As a consequence of the ID being empty, you have to set the \H parent argument of subsequent headers to empty if you wish to use them, as in:
= John Smith's Homepage
= I like dogs
{parent=}
Doing:
= John Smith's Homepage
= I like dogs
{parent=John Smith's Homepage}
doesn't currently work, because the ID john-smith-s-homepage is just a synonym to the empty ID, and you can't currently set parent= to point to synonyms.

You can prevent the second ID from being given by simply setting the ID to be explicitly empty:
= John Smith's Homepage
{id=}
which would generate just a single empty ID.
When the file or directory being converted has an ancestor directory with an ourbigbook.json file, your current working directory does not have any effect on the OurBigBook output. For example, if we have:
/project/ourbigbook.json
/project/README.bigb
/project/subdir/README.bigb
then all of the following conversions produce the same output:
- directory conversion:
cd /project && ourbigbook .
cd / && ourbigbook project
cd project/subdir && ourbigbook ..
- file conversion:
cd /project && ourbigbook README.bigb
cd / && ourbigbook project/README.bigb
cd project/subdir && ourbigbook ../README.bigb
When there isn't an ourbigbook.json, everything happens as though there were an empty ourbigbook.json file in the current working directory. So for example:
- outputs that would be placed relative to inputs are still placed in that place, e.g. README.bigb -> index.html always stay together
- outputs that would be placed next to the ourbigbook.json are put in the current working directory, e.g. the out directory
Internally, the general philosophy is that the JavaScript API in index.js works exclusively with paths relative to the project toplevel directory. It is then up to callers such as ourbigbook to ensure that filesystem specifics handle the relative paths correctly.
Check the database for consistency, e.g. duplicated IDs. Don't do anything else, including ID extraction, which must have been done previously.
The initial use case was for usage in Parallel builds.
This is the most important option of the software.
It produces a copy of the HTML of cirosantilli.com/china-dictatorship to stdout.
The data is stored inside an NPM package, making it hard to censor that information, see also: cirosantilli.com/china-dictatorship#mirrors
Usage:
ourbigbook --china > china.html
firefox china.html
The --dry-run option is a good way to debug the --publish option, as it builds the publish output files without doing any git commands that would be annoying to revert. So after doing:
ourbigbook --dry-run --publish .
you can just go and inspect the generated HTML to see what would get pushed at:
cd out/publish/out/publish/
see also: the out directory.

Similar to --dry-run, but it runs all git commands except for git push, which gives a clearer idea of what --publish would actually do, including the git operations, but without publishing anything:
./ourbigbook --dry-run --publish .
Makes includes render the included content in the same output file in which the include is located, instead of the default behaviour of creating links.
For example given:
README.bigb
= Index
\Include[notindex]
notindex.bigb
= Notindex
A paragraph in notindex.
== Notindex 2
then conversion with:
ourbigbook --embed-includes README.bigb
produces an output index.html equivalent to that of the input file:
= Index
== Notindex
A paragraph in notindex.
=== Notindex 2
Note that a prior ID extraction pass is not required: --embed-includes just makes \Include read files as they are found in the source.

In addition to this:
- cross file references outside the included files are disabled, and the cross file ID database does not get updated. It should be possible to work around this, but we are starting with the simplest implementation, which forbids it. TODO at: github.com/ourbigbook/ourbigbook/issues/343. The problem those references cause is that the IDs of included headers show up as duplicates of those in the ID database. This should be OK to start with, because the more common use case with --html-single-page is that of including all headers in a single document. TODO: this option is gone.
Otherwise, include only adds the headers of the other file to the table of contents of the current one, but not the body of the other file. The ToC entries then point to the headers of the included external files.

You may want to use this option together with --embed-resources to produce fully self-contained individual HTML files for your project.

Embed as many external resources such as images and CSS as possible into the HTML output files, rather than linking to external resources.
For example, when converting a simple document to HTML:
index.bigb
= Index
My paragraph.
with:
ourbigbook index.bigb
the output contains references to where OurBigBook is installed on the local filesystem:
<style>
@import "/home/ciro/bak/git/ourbigbook/_obb/ourbigbook.css";
</style>
<script src="/home/ciro/bak/git/ourbigbook/_obb/ourbigbook_runtime.js"></script>
The advantage of this is that we don't have to duplicate this content for every single file. But if you are giving this file to someone else, they would likely not have those files at those exact locations, which would break the HTML page.
With --embed-resources, the output contains instead something like:
<style>/*! normalize.css v8.0.1 | MIT License | github.com/necolas/normalize.css */html{ [[ ... A LOT MORE CSS ... ]]</style>
<script>/*! For license information please see ourbigbook_runtime.js.LICENSE.txt */ !function(t,e){"object"==typeof exports&&"object"==typeof module?module.exports=e() [[ ... A LOT MORE JAVASCRIPT ... ]]</script>
This way, all the required CSS and JavaScript is present in the HTML file itself, and so readers are able to view the file correctly without needing to install any missing dependencies.
The use case for this option is to produce a single HTML file for an entire build that is fully self contained, and can therefore be given to consumers and viewed offline, much like a PDF.
Examples of embeddings done:
- CSS and JavaScript are copy-pasted in place into the HTML. The default built-in CSS and JavaScript files used by OurBigBook (e.g. the KaTeX CSS used for mathematics) are currently all automatically downloaded as NPM package dependencies of ourbigbook. Without --embed-resources, those CSS and JavaScript files use their main cloud CDN URLs, and therefore require an Internet connection for viewing the generated documents. The embedded version of the document can be viewed offline, however. There is a known bug though: KaTeX fonts are not currently embedded, so math won't render properly offline. The situation is similar to that of images, but a bit harder because we also need to fetch the blobs from the CSS, which is likely doable from Webpack.
Examples of embedding that could be implemented in the future:
- images are downloaded if needed and embedded as data: URLs. Doing this however has a downside: it would slow page loading down. The root problem is that HTML was not designed to contain assets; notably, it doesn't have byte position indices that could tell it to skip blobs while parsing and how to refer to them later when they show up on screen. This is kind of why EPUB exists: github.com/ourbigbook/ourbigbook/issues/158. Images that are managed by the project itself and already locally present, such as those inside the project itself or due to media-providers, usually don't require download. For images linked directly from the web, we maintain a local download cache, and skip downloads if the image is already in the cache. To re-download due to image updates, use either:
  - --asset-cache-update: download all images whose local disk timestamp is older than the HTTP modification date, with If-Modified-Since
  - --asset-cache-update-force: forcefully redownload all assets
Keep in mind that certain things can never be embedded, e.g.:
- YouTube videos, since YouTube does not offer any download API
Always render all selected files, irrespective of whether they are known to be outdated or not.
OurBigBook stores the timestamp of the last successful ID extraction step for each file.
For ID extraction, we always skip the extraction if the filesystem timestamp of a source file is older than the last successful extraction.
For render:
- we mark output files as outdated when the corresponding source file is parsed
- we also skip rendering non-outdated files by default when you invoke ourbigbook on a directory, e.g.
ourbigbook .
, as this greatly speeds up the interactive error fixing turnaround time - we always re-render fully when you specify a single file, e.g.
ourbigbook path/to/README.bigb
However, note that skipping renders, unlike for ID extraction, can lead to some outdated pages.
This option disables the timestamp skip for rendering, to ensure that you get a fully clean, updated render.
E.g. consider if you had two files:
file1.bigb
= File 1
== File 1 1
file2.bigb
= File 2
== File 2 1
\x[file-1-1]
We then do the initial conversion:
ourbigbook .
and see output like:
extract_ids file1.bigb
extract_ids file1.bigb finished in 45.61287499964237 ms
extract_ids file2.bigb
extract_ids file2.bigb finished in 15.163879998028278 ms
render file1.bigb
render file1.bigb finished in 23.21016100049019 ms
render file2.bigb
render file2.bigb finished in 25.92908499762416 ms
indicating full conversion without skips.
But then, if we modify just file1.bigb as:
= File 1
== File 1 1 hacked
{id=file-1-1}
the following conversion with:
ourbigbook .
would look like:
extract_ids file1.bigb
extract_ids file1.bigb finished in 45.61287499964237 ms
extract_ids file2.bigb
extract_ids file2.bigb skipped by timestamp
render file1.bigb
render file1.bigb finished in 41.026930000633 ms
render file2.bigb
render file2.bigb skipped by timestamp
Because we skipped the file2.bigb render, it will still contain the outdated "File 1 1" instead of "File 1 1 hacked". We could in principle solve this problem by figuring out exactly which files need to be re-rendered when a given ID changes; we already have to solve a similar problem due to query bundling, and this will need to be done sooner or later for OurBigBook Web. But lazy now: github.com/ourbigbook/ourbigbook/issues/207, this is hard stuff.
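The timestamp skip described above can be modeled as follows. This is an illustrative sketch of the comparison logic only, not the actual OurBigBook code:

```python
import os
import tempfile
import time

def should_skip(source_path, last_extraction_time):
    # Skip ID extraction if the source was not modified since the
    # last successful extraction, mirroring the timestamp check.
    return os.path.getmtime(source_path) < last_extraction_time

d = tempfile.mkdtemp()
f1 = os.path.join(d, 'file1.bigb')
f2 = os.path.join(d, 'file2.bigb')
for p in (f1, f2):
    with open(p, 'w') as f:
        f.write('= File\n')

# Pretend an extraction pass finished shortly after the files were written.
last_extraction = time.time() + 1

# Now only file1.bigb is modified again, later than the extraction.
os.utime(f1, (time.time() + 2, time.time() + 2))

print(should_skip(f1, last_extraction))  # False: must be re-extracted
print(should_skip(f2, last_extraction))  # True: skipped by timestamp
```

As the section explains, this check is always safe for ID extraction, but for rendering it can leave stale cross-reference text behind.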
Parse and overwrite the local .bigb OurBigBook Markup input source files with the recommended code format. E.g.:
ourbigbook --format-source README.bigb
overwrites README.bigb with the recommended formatting, and:
ourbigbook --format-source .
does that for every single file in the current directory.
This option uses the bigb output format. In order to reach a final stable state, you might need to run the conversion twice. This is not ideal but we don't have the patience to fix it. The reason is that links in image titles may expand twice. This is the usual type of two-level recursion that has caused much more serious problems, see e.g. \x within title restrictions. E.g. starting with:
<image my big dog>
\Image[image.png]{title=My <big dog>}
= Big dog
the first conversion leads to uppercasing inside the image title:
<image my big dog>
\Image[image.png]{title=My <big Dog>}
= Big Dog
and the second one to uppercasing the reference to the image title:
<image my big Dog>
\Image[image.png]{title=My <big Dog>}
= Big Dog
The project templates are simple ourbigbook project directories that serve as good starting points for new ourbigbook projects.
They also contain useful examples of OurBigBook Markup usage to help users get started quickly.
Generate one of the template repositories locally:
ourbigbook --generate default
: a good starter template that illustrates many key OurBigBook features
ourbigbook --generate min
: a minimal template that is still sane
ourbigbook --generate subdir
: a template in which the OurBigBook source is located in a subdirectory docs/. This template illustrates that everything works exactly as if the OurBigBook source were in the git repository toplevel. This is a convenient setup for programming projects that want to use OurBigBook for their documentation without polluting their toplevel.
End users almost never want this, because it means that to have a sane setup you need to:
- install OurBigBook globally with npm install -g ourbigbook
- generate the template
- then install OurBigBook locally again with npm install
so maybe we should just get rid of that option and just ensure that we can provide an up-to-date working template for the latest release.
For now we are keeping it, as it is useful to automate the updating of templates during the release procedure.
You can get an overview of all macros in JSON format with:
ourbigbook --help-macros
Give multiple values to enable certain types of logs to stderr to help debugging, e.g.:
./ourbigbook --log ast tokens -- README.bigb
Note that this follows commander.js' insane variadic arguments syntax, and thus the -- is required above. If you want to omit it for a single value, you have to add the = sign as in:
./ourbigbook --log=ast README.bigb
Values not documented in other sections:
ast
: the full final parsed abstract syntax tree as JSON
ast-simple
: a simplified view of the abstract syntax tree with one AstNode or AstArgument per line, showing only the most important fields
ast-pp-simple
: view snapshots of the various abstract syntax tree post-process stages, more info at: conversion process overview
ast-inside
: print the AST from inside the ourbigbook.convert call before it returns. This is useful to debug the program if ourbigbook.convert blows up in the next stages before returning.
db
: show database transactions done by OurBigBook, to help debug stuff like cross file references
mem
: show process memory usage as per Node.js' process.memoryUsage() after each --log perf step: stackoverflow.com/questions/12023359/what-do-the-return-values-of-node-js-process-memoryusage-stand-for. Implies --log perf. To use this option, you must run OurBigBook with the --expose-gc Node.js command line option, e.g. with: node --expose-gc $(which ourbigbook) myfile.bigb
parse
: parsing steps
tokenize
: tokenization steps
tokens
: final parsed token stream
tokens-inside
: like ast-inside but for tokens. Also adds the token index to the output, which makes debugging the parser way easier.
This nifty little option outputs to stderr what the header graph looks like!
It is a bit like a table of contents in your terminal, for when you need to have a look at the outline of the document to decide where to place a new header, but are not in the mood to open a browser or use the browser editor with preview.
Sample output excerpt for this document:
= h1 ourbigbook
== h2 1 quick-start
== h2 2 design-goals
=== h3 2.1 saner
=== h3 2.2 more-powerful
== h2 3 paragraphs
== h2 4 links
This option can also serve as a debug tool for header tree related features (confession: that was its original motivation!).
TODO
Print performance statistics to stderr. For example:
./ourbigbook --log=perf README.bigb
could output the following, which shows how long different parts of the conversion process took, to help identify bottlenecks:
perf start: 181.33060800284147
perf tokenize_pre: 181.4424349963665
perf tokenize_post: 318.333980999887
perf parse_start: 319.1866770014167
perf post_process_start: 353.5477180033922
perf post_process_end: 514.1527540013194
perf render_pre: 514.1708239987493
perf render_post: 562.834307000041
perf end: 564.0349840000272
perf convert_input_end 566.1234430000186
perf convert_path_pre_sqlite 566.1564619988203
perf convert_path_pre_sqlite_transaction 566.2528780028224
perf convert_path_post_sqlite_transaction 582.256645001471
perf convert_path_end 582.3469280004501
This option can also be useful to mark phases of the conversion, to identify which phase other logs are coming from. E.g. if we wanted to know which part of the conversion is making a ton of database requests, we could run:
ourbigbook --log db perf -- README.bigb
and we would see the database requests made at each conversion phase.
Note that --log perf currently does not take sub-converts into account: e.g. include and \OurBigBookExample both call the toplevel conversion function convert, and therefore go through all the conversion intervals, but we do not account for those separately, and just dump them all into the same toplevel interval in which they happen, currently between post_process_start and post_process_end.
Skip the database sanity check that is normally done after the ID extraction step.
This was originally added to speed up the web upload development loop: when we knew that there were no errors in the database after a local conversion and wanted to get to the upload phase faster, since the DB check can take several seconds for a large input.
It later also found usage with Parallel builds followed by a --check-db-only.
Don't use the ID database during this run. This implies that the on-disk database is not read, and also not written to. Instead, a temporary clean in-memory database is used.
If not given, cross references render with the .html extension, as in:
<a href=not-readme.html#h2-in-not-the-readme>
This way, those links will work when rendering locally to .html files, which is the default behaviour of:
ourbigbook .
If given however, the links render without the .html, as in:
<a href=not-readme#h2-in-not-the-readme>
which is what is needed for servers such as GitHub Pages, which automatically remove the .html extension from paths. This option is automatically implied when publishing to targets that remove the .html extension, such as GitHub Pages.
Only extract IDs to fill the ID database, don't render. This saves time if you only want to render a single file which has references to other files, without getting any errors.
Same as --no-render, but for the -W, --web upload stage. Web upload consists of two stages:
- extract local IDs and render to split ourbigbook files. This can be disabled with --no-render
- upload to web, first with an ID extraction pass, and then a render pass. --no-web-render skips that render pass
Set a custom output directory for the conversion.
If not given, the project toplevel directory is used.
Suppose we have an input file ./test.bigb. Then:
ourbigbook --outdir my_outdir test.bigb
places its output at:
my_outdir/test.html
The same would happen if we instead did a full directory conversion as in:
ourbigbook --outdir my_outdir .
The output would also be placed in my_outdir/test.html.
This option also relocates the out directory to the target destination, e.g.:
ourbigbook --outdir my_outdir test.bigb
would generate:
my_outdir/out
This means that the source tree remains completely clean, and every output and temporary cache is put strictly under the selected --outdir.
Save the output to a given file instead of outputting to stdout:
./ourbigbook --outfile not-readme.html not-readme.bigb
The generated output is slightly different than that of:
./ourbigbook not-readme.bigb > not-readme.html
because with --outfile we know where the output is going, and so we can generate relative includes to the default CSS/JavaScript files.
Default: the html output format.
The default output format. Web pages!!!
Outputs as OurBigBook Markup, i.e. the same format as the input itself!
While using -O bigb is not a common use case, the existence of this format has the following applications:
- automatic source code formatting, e.g. with --format-source. The recommended format, including several edge cases, can be seen in the test file test_bigb_output.bigb, which should be left unchanged by a bigb conversion.
- manipulating source code on OurBigBook Web to allow editing either individual sections separately, or multiple sections at once
- this could be adapted to allow us to migrate updates with breaking changes to the source code more easily. Alternatively, on OurBigBook Web we might just start storing the AST instead of the source, and just render the source whenever users want to edit it.
Can be tested interactively with:
ourbigbook --no-db -O bigb --stdout --log=ast-simple test_bigb_output.bigb
One important property of the bigb conversion is that it must not alter the AST, and therefore the final output, in any way. One good test is:
ourbigbook README.bigb &&
mv out/html/index.html out/html/old.html &&
ourbigbook --format-source README.bigb &&
ourbigbook README.bigb &&
diff -u out/html/old.html out/html/index.html
This was tracked at: github.com/ourbigbook/ourbigbook/issues/83
This output format is used as an intermediate step in automatic ID from title. Unlike the regular HTML output, it does not contain any tags.
It does not have serious applications for end users. We decided to expose it from the CLI mostly for fun, as it required no extra work at all: it is treated internally exactly like any other conversion format.
The id output format conversion is very simplistic: it basically just extracts the content argument of most macros. An important exception to that behaviour is the first argument of the \x macro: see \x id output format.
For example, converting:
\i[asdf]
with the id output format produces simply:
asdf
instead of the HTML output:
<i>asdf</i>
This conversion type is useful in situations where users don't expect conversion to produce any HTML tags. For example, you could create a header:
= My \i[asdf]
and then, following the automatic ID from title algorithm, that header would have the more commonly desired ID my-asdf, and not my-<i>asdf</i> or my-i-asdf-i.
Similarly, any macro argument that references an ID undergoes id output format conversion. E.g. the above header could be referenced by:
<My \i[asdf]>
which is equivalent to:
\x[my-asdf]
Besides being more intuitive, this conversion also guarantees greater format portability, in case we ever decide to support other output formats besides HTML!
Macros that don't have a content argument, i.e. typically non-textual macros such as images, are just completely removed. We could put effort into outputting their title argument correctly, but meh, not worth the effort.
The id output format also serves as a good start towards generalizing OurBigBook to multiple outputs, as it is a simple format.
\x uses href if the content is not given explicitly. Previously, if \x didn't have a content, we were actually rendering the \x to calculate the ID. But then we noticed that doing so would require another parse pass, so we just went for this simpler approach. This is closely linked to \x within title restrictions.
For example in:
= Animal
\x[image-i-like-dog]
\Image[dog.jpg]
{title=I \i[like] \x[dog]}
== Dog hacked
{id=dog}
If you wanted image-i-like-dog-hacked instead, you would need to explicitly give it as in:
= Animal
\x[image-i-like-dog-hacked]
\Image[dog.jpg]
{title=I like \x[dog][dog hacked]}
== Dog hacked
{id=dog}
For similar reasons as the above, plural inflection with the \x p argument is not considered either, e.g. you would have:
= Animal
\x[image-i-like-dog]
\Image[dog.jpg]
{title=I like \x[dog]{p}}
== Dog
and not:
\x[image-i-like-dogs]
This can however be worked around with the \x magic argument, as in:
= Animal
\x[image-i-like-dogs]
\Image[dog.jpg]
{title=I like <dogs>}
== Dog
One day, one day. Maybe.
OurBigBook tooling is so amazing that we also take care of the HTML publishing for you!
Once a publish target is properly set up, all you have to do is run:
git add README.bigb
git commit -m 'more content!'
ourbigbook --publish
and your changes will be published to the default target specified in ourbigbook.json. If not specified, e.g. with the --publish-target option, the default target is to publish to GitHub Pages.
Only changes committed to Git are pushed.
Files that ourbigbook knows how to process get processed, and only their outputs are added to the published repo. Those file types are:
- .bigb files are converted to .html
- .scss files are converted to .css
Every other Git-tracked file is pushed as is.
When --publish is given, stdin input is not accepted, and so the current directory is built by default, i.e. the following two are equivalent:
./ourbigbook --publish
./ourbigbook --publish .
Publishing only happens if the build has no errors.
Like the --publish option, but also automatically runs:
- git add -u to add changes to any files that have previously been git tracked
- git commit -m <commit-message> to create a new commit with those changes
This allows you to publish your changes live in a single command such as:
ourbigbook --publish-commit 'my amazing change' .
With great power comes great responsibility of course, but who cares!
Attempt to publish without converting first. Implies the --publish option.
This can only work if there was previously a successful publish conversion which then failed during the following steps, e.g. due to a network error.
This option was introduced for debugging purposes, to help get the git commands right for large conversions that took a long time.
What type of target to publish for. The generated output of each publish target is stored under:
out/publish/out/<target>
e.g.:
out/publish/out/local
Publish to GitHub Pages. See also: Section 5.5.22.3. "Publish to GitHub Pages".
Publish as a local directory that can be zipped and sent to someone else, and then correctly viewed by a browser locally by the receiver. You can then zip it from the Linux command line for example with:
ourbigbook --publish --publish-target local
cd out/publish/out
zip -r local.zip local
Maybe we should do the zip step from the OurBigBook CLI as well. There is apparently no Node.js standard library wrapper however: stackoverflow.com/questions/15641243/need-to-zip-an-entire-directory-using-node-js
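For reference, recursive zipping like the `zip -r` command above needs only the standard library in other languages. Here is a generic Python sketch (not part of OurBigBook, and the demo directory stands in for out/publish/out/local):

```python
import os
import tempfile
import zipfile

def zip_directory(src_dir, zip_path):
    # Recursively add every file under src_dir to the archive,
    # storing paths relative to src_dir's parent, like `zip -r`.
    base = os.path.dirname(os.path.abspath(src_dir))
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, base))

# Demo on a throwaway directory.
d = tempfile.mkdtemp()
local = os.path.join(d, 'local')
os.makedirs(local)
with open(os.path.join(local, 'index.html'), 'w') as f:
    f.write('<html></html>')
zip_directory(local, os.path.join(d, 'local.zip'))
print(zipfile.ZipFile(os.path.join(d, 'local.zip')).namelist())
```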
GitHub pages is the default OurBigBook publish target.
Since that procedure is so important, it is documented directly at: play with the template.
If you want to publish your root user page, which appears at / (e.g. github.com/cirosantilli/cirosantilli.github.io for the user cirosantilli), GitHub annoyingly forces you to use the master branch for the HTML output.
This means that you must place your .bigb input files in a branch other than master, to clear up master for the generated HTML. ourbigbook automatically detects whether your repository is a root repository by parsing git remote output, but you must set up the branches correctly yourself.
So on a new repository, you must first check out a different branch, as in:
git init
git checkout -b dev
or, to move an existing repository to a non-master branch:
git checkout -b dev
git push origin dev:dev
git branch -D master
git push --delete origin master
You will then also want to set your default repository branch to dev in the settings for that repository: help.github.com/en/github/administering-a-repository/setting-the-default-branch
It's a GitHub bug/feature: github.com/orgs/community/discussions/52252
Maybe we should just ignore the .github directory when publishing, as otherwise it leads to a broken link on the _dir directory listings. TODO: find some upstream discussion.
Split each header into its own separate HTML output file.
This option allows you to keep all headers in a single source file, which is much more convenient than working with a billion separate source files, and let them grow naturally as new information is added, but still be able to get a small output page on the rendered website that contains just the content of the given header. Such split pages:
- load faster on the browser
- get way better Google PageRank for title hits
- allow for full metadata display, e.g.:
- Header metadata section
- Disqus/Giscus comments
For example, given an input file called hello.bigb containing:
= h1
h1 content.
A link to another section: \x[h1-1].
== h1 1
h1-1 content.
== h1 1 1
h1-1-1 content.
== h1 1 2
h1-1-2 content.
a conversion command:
ourbigbook --split-headers hello.bigb
would produce the following output files:
hello.html
: contains the entire rendered document as usual. Remember that this is called hello.html instead of h1.html because the toplevel header ID is automatically derived from its filename. Each header contains an on-hover link to the single-file split version of the header.
hello-split.html
: contains only the contents directly under = h1, but not under any of the subheaders, e.g.:
- the h1 content. appears in this rendered output
- the h1-1-1 content. does not appear in this rendered output
The -split suffix is appended in order to differentiate the output path from hello.html, and can be customized with the \H splitSuffix argument option.
h1-1.html, h1-1-1.html, h1-1-2.html
: contain only the contents directly under their headers, analogously to hello-split.html, but now we don't need to worry about the input filename and collisions, and just directly use the ID of each header
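The input-to-output file mapping above can be modeled roughly as follows. This is a toy illustration of the naming rules described, not the real implementation, and the function name is made up:

```python
def split_header_outputs(input_basename, subheader_ids, split_suffix='-split'):
    # The toplevel header renders to <input_basename>.html (full document)
    # and <input_basename><split_suffix>.html (toplevel content only),
    # since the toplevel header ID is derived from the filename.
    outputs = [input_basename + '.html',
               input_basename + split_suffix + '.html']
    # Subheaders can use their own IDs directly: no collision with the
    # input filename is possible, so no suffix is needed.
    outputs += [i + '.html' for i in subheader_ids]
    return outputs

print(split_header_outputs('hello', ['h1-1', 'h1-1-1', 'h1-1-2']))
# ['hello.html', 'hello-split.html', 'h1-1.html', 'h1-1-1.html', 'h1-1-2.html']
```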
--split-headers is implied by the --publish option: the published website will automatically get the split pages. There is no way to turn it off currently. A pull request would be accepted, especially if it offers an ourbigbook.json way to do it. Maybe it would be nice to have a more generalized way of setting any CLI option equivalent from the ourbigbook.json, with an option cli vs cli-publish so that cli-publish is publish-only. Just lazy for now/not enough pressing use case met.
By default, all cross references point to the non-split version of headers, including those found in split headers. The same applies to cross file references when there are multiple input files.
The rationale for this is that it gives readers the most context around the header by simply scrolling.
For example, considering the example document above converted with -S, --split-headers, cross references such as \x[h1-1] would point:
- from the non-split hello.html to the section in the current non-split file: #h1-1
- from the split hello-split.html to the same section in the non-split file: hello.html#h1-1
In order to make the split version be the default for some headers, you can use the \H splitDefault argument.
This is something that we might consider changing with some option, e.g. keeping the split headers more self contained. But for now, the general feeling is that going to nosplit by default is the best default.
When converting a file, also output to stdout in addition to outputting to a file:
ourbigbook --stdout input.bigb
The regular output file is also saved.
Cannot be used when converting a directory.
Select a custom Liquid template file for the output.
If not given, this option defaults to the value of template, which if not given defaults to ourbigbook.liquid.html.
The repository of this documentation for example has a sample ourbigbook.liquid.html at: ourbigbook.liquid.html.
If no template is present, the default template at one point was the following. This will get out of sync with the code sooner or later, but it should still serve as a good base example for this documentation:
<!doctype html>
<html lang=en>
<head>
<meta charset=utf-8>
<title>{{ title }}</title>
<style>{{ style }}</style>
</head>
<body class="ourbigbook">
{{ body }}
</body>
</html>
Defined variables:
body
: the rendered body
dir_relpath
: relative path from the rendered output to the _dir directory. Sample usage to link to the root directory listing:
<div><a href="{{ dir_relpath }}{{ html_index }}">Website source code</a></div>
git_sha
: SHA of the latest git commit of the source code, if in a git repository
github_prefix
: this variable is set only if the "github" media provider is set. It points to the URL prefix of the provider, e.g. if you have in your ourbigbook.json:
"media-providers": { "github": { "remote": "mygithubusername/media" } }
then you can use media from that repository with:
<img src="{{ github_prefix }}/myimage.jpg" />
html_ext
: .html for local renders, empty for server renders. So e.g. to link to an ID myid you can use:
<a href="{{ root_relpath }}myid{{ html_ext }}">
This will ideally be replaced with a more generic link-to-arbitrary-ID mechanism at some point: github.com/ourbigbook/ourbigbook/issues/135
html_index
: /index.html for local renders, empty for server renders
input_path
: path to the OurBigBook Markup source file that generated this output, relative to the project toplevel directory, e.g. path/to/myfile.bigb. May be an empty string in the case of autogenerated sources, notably automatic directory listings, so you should always check for that with something like:
{% if input_path != "" %} <div>Source code for this page: <a href="{{ raw_relpath }}/{{ input_path }}">{{ input_path }}</a></div> {% endif %}
is_root_relpath
: boolean. True if the toplevel being rendered on this output file is the index article. E.g. for a README.bigb containing:
= John Smith's homepage
== Mathematics
with split header conversion, the value of is_root_relpath would be:
- index.html: true
- split.html: true
- mathematics.html: false
root_page
: relative path to the toplevel page, e.g. either index.html or ../index.html locally, and ./ or ../ on server oriented rendering
root_relpath
: relative path from the rendered output to the toplevel directory. This allows toplevel resources like CSS to be found seamlessly from inside subdirectories, especially when rendering locally. For example, for the toplevel CSS main.css, which is generated from main.scss, we can use:
<link rel="stylesheet" type="text/css" href="{{ root_relpath }}main.css">
Then, when a file is rendered locally under a subdirectory, for example as mysubdir/myfile.html, OurBigBook will set root_relpath=../, giving the desired:
<link rel="stylesheet" type="text/css" href="../main.css">
And if the output path were instead just myotherfile.html, root_relpath expands to an empty string, giving again the correct:
<link rel="stylesheet" type="text/css" href="main.css">
raw_relpath
: relative path from the rendered output to the _raw directory, which is the directory where non-OurBigBook Markup output resources are placed during conversion. Should be used to prefix all such resources, e.g.:
<link rel="shortcut icon" href="{{ raw_relpath }}/logo.svg" />
file_relpath
: similar to raw_relpath, but links to the _file output directory instead
style
: the default OurBigBook stylesheets
title
: the document title
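The root_relpath behaviour described above (one parent-directory hop per subdirectory level, empty at the toplevel) can be sketched as follows. This is an illustrative model only, not the actual template engine code:

```python
def root_relpath(output_path):
    # Relative path from the output file's directory back to the toplevel:
    # one '../' per directory level, empty string at the toplevel itself.
    depth = output_path.count('/')
    return '../' * depth

print(root_relpath('mysubdir/myfile.html'))  # '../'
print(root_relpath('myotherfile.html'))      # ''
print(root_relpath('a/b/page.html'))         # '../../'
```

Prefixing `root_relpath(output_path) + 'main.css'` then yields the correct stylesheet link for any output depth, as in the examples above.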
We pick Liquid because it is server-side safe: if we ever some day offer a compilation service, Liquid is designed to prevent arbitrary code execution and infinite loops in templates.
ourbigbook.liquid.html is the default template file name used for --template, as mentioned at template.
true iff the --publish-target is a standard website, i.e. something that will be hosted publicly on a URL. This is currently true for the following publish targets:
- --publish-target github-pages
and false for the following targets:
- --publish-target local
This template variable is useful to remove JavaScript elements that only work on public websites and not on localhost or file:, e.g.:
- Google Analytics
- Giscus
Read titles from stdin line by line in a loop, and output only IDs to stdout, performing automatic ID from title conversion on each input line.
Sample usage:
( echo 'Hello world'; sleep 1; echo 'C++ is great'; sleep 1; echo 'β Centauri' ) | ourbigbook --title-to-id
outputs:
hello-world
c-plus-plus-is-great
beta-centauri
with one second intervals between each line.
The original application of this option was to allow external non-Node.js processes to accurately calculate IDs from human readable titles, since the non-ASCII handling of the algorithm is complex and hard to reimplement accurately.
From Python for example one may run something like:
from subprocess import Popen, PIPE, STDOUT
import time
p = Popen(['ourbigbook', '--title-to-id'], stdout=PIPE, stdin=PIPE)
p.stdin.write('Hello world\n'.encode())
p.stdin.flush()
print(p.stdout.readline().decode()[:-1])
time.sleep(1)
p.stdin.write('bonne journée\n'.encode())
p.stdin.flush()
print(p.stdout.readline().decode()[:-1])
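As a rough illustration of what the conversion does for the sample titles shown above, here is a toy approximation. This is emphatically not the real algorithm (the hardcoded character table and function name are invented for this sketch; the actual non-ASCII handling is far more involved, which is exactly why --title-to-id exists):

```python
import re

# Hypothetical, drastically simplified character table.
SPECIAL = {'+': 'plus', 'β': 'beta'}

def toy_title_to_id(title):
    out = []
    for ch in title:
        if ch in SPECIAL:
            # Expand special characters into words, padded so the
            # hyphen-collapsing step below separates them.
            out.append(' ' + SPECIAL[ch] + ' ')
        else:
            out.append(ch)
    s = ''.join(out).lower()
    # Keep ASCII alphanumerics, collapse everything else into hyphens.
    s = re.sub(r'[^a-z0-9]+', '-', s)
    return s.strip('-')

print(toy_title_to_id('Hello world'))   # hello-world
print(toy_title_to_id('C++ is great'))  # c-plus-plus-is-great
print(toy_title_to_id('β Centauri'))    # beta-centauri
```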
This option enables actions that would allow arbitrary code execution, so you should only pass it if you trust the repository author. Enabled functionality includes:
Don't quit ourbigbook immediately. Instead, watch the selected file or directory for changes, and rebuild individual files when changes are detected.
Watch every .bigb file in an entire directory:
ourbigbook --watch .
When a directory is given as the input path, this automatically first does an ID extraction pass on all files, to support cross file references.
Now you can just edit any OurBigBook file such as README.bigb, save the file in your editor, and refresh the webpage, and your change should be visible: no need to run an ourbigbook command explicitly every time. Exit by entering Ctrl + C in the terminal.
Watch a single file:
ourbigbook --watch README.bigb
When a single file is watched, the reference database is not automatically updated. If it is not already up-to-date, you should first update it with:
ourbigbook .
otherwise you will just get a bunch of undefined ID errors every time the input file is saved.
TODO: integrate Live Preview: asciidoctor.org/docs/editing-asciidoc-with-live-preview/ to also dispense with the browser refresh.
Sync local directory to OurBigBook Web instead of doing anything else.
To upload the entire repository, run from toplevel:
ourbigbook --web
To update all IDs in just a single physics.bigb source file, use:
ourbigbook --web physics.bigb
This requires that all external IDs that physics.bigb might depend on have already been previously uploaded, e.g. with a previous ourbigbook --web from toplevel.
The source code is uploaded and the conversion to HTML happens on the server; no conversion is done locally.
This option is not amazing right now. It was introduced mostly to allow uploading the reference demo content from cirosantilli.com to ourbigbook.com/cirosantilli, and it is not expected that it will be a major use case for end users for a long time, as most users are likely to just edit on OurBigBook Web directly.
Some important known limitations:
- every local file has to be uploaded every time to check if it needs rebuilding or not, by comparing old vs new file contents. At Store SHA of each article + descendants and skip API re-renders for entire subtrees we describe a better Git-like Merkle tree method where entire unchanged subtrees can be skipped; that will be Nirvana.
- file renaming does not work: it will think that you are creating a new file and blow up with duplicates
- if there's an error in a later file, the database is still modified by the previous files, i.e. there is no atomicity. A way to improve that would be to upload all files to the server in one go, and let the server convert everything in one transaction. However, this would lead to a very long server action, which would block any other incoming request (I tested, everything is single threaded)
However, all of those are fixable, and in an ideal world, will be fixed. Patches welcome.
If you delete a header locally and then do a -W, --web upload, the article is currently not removed from the web. Instead, we simply make its content become empty, and mark it as unlisted.
The reason for this is that the article may have metadata created by other users, such as OurBigBook Web discussions, which we don't want to delete.
In order to actually remove the header you should follow the procedure from Section 7.1.5. "OurBigBook Web page renaming", which instead first moves all discussions over to a new article before deleting.
It is possible to mark articles as unlisted in OurBigBook Web.
This also happens automatically when doing a -W, --web upload for previously published articles that were deleted locally, see also: Section 5.5.29.1. "Local header deletion on web upload".
Marking an article as unlisted makes it not show up by default on article listings, including:
- global or per-user listings of latest and top articles
- topics
- lists of articles liked by given users
Unlisted articles do however appear on listings such as:
- as descendants of an article when seen on each article page, i.e. on the Table of contents and below, due to the dynamic article tree
Ask for the password in an interactive terminal in case there was a default password that would have otherwise been chosen.
Currently the only case where this happens is --web-test, which automatically sets a default --web-password asdf.
-W, --web dry run: skip any operations that would interact with the OurBigBook Web server, doing only all the local preparation required for upload. This is mostly useful for testing the OurBigBook CLI.
Upload only the selected ID with -W, --web. That ID must belong to a file being converted for everything to work well, e.g.:
ourbigbook --web --web-id quantum-mechanics physics.bigb
Force ID extraction on -W, --web, even if the article content is unchanged. The only use case so far for this has been as a hack for incomplete database updates.
The correct approach is instead to actually re-extract server side as part of the migration. We should do this by implementing an Article.reextract analogous to Article.rerender, and a helper web/bin/rerender-articles.js.
Force remote render on -W, --web: don't skip it even if the render is believed to be up-to-date with the source. This is analogous to -F, --force-render. However, --web-force-render does not affect the local pre-conversion to the split bigb format that is done before upload, only the remote render. Conversely, when used together with -W, --web, -F, --force-render forces the local bigb conversion, and not the remote one.
Render up to a maximum of N articles.
Useful for quick and dirty OurBigBook Web performance benchmarking, especially together with --web-force-render to avoid skipping over finished files.
This option was originally introduced to help testing bulk nested set updates.
Only update the nested set index after all articles have been uploaded.
There is a complex time tradeoff between using this option or not, which depends on:
- how many articles the user has
- how many articles are being uploaded
This option was initially introduced for Wikipedia bot uploads. At 104k articles, the bulk update takes 1 minute, but each individual update of an empty article takes about 6 seconds (and is dominated by the nested set update time), making this option an indispensable time saver for the initial upload in that case.
Therefore in that case, for fewer than 10 articles you are better off without this option, but with more than 10 articles you would want to use it.
This rule of thumb should scale for smaller deployments as well. E.g. at 10k articles, both individual updates and bulk updates should be 10x faster, so the "use this option for 10 or more articles" rule of thumb should still be reasonable.
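The arithmetic behind the rule of thumb can be sketched as follows; the timings are the rough figures quoted above, not fresh measurements:

```javascript
// Break-even sketch for the bulk nested set update option, using the rough
// timings quoted above (illustrative numbers, not measured here).
const bulkUpdateSeconds = 60; // one bulk nested set update at ~104k articles
const perArticleSeconds = 6;  // one individual nested set update

// Smallest upload size for which a single bulk update beats per-article updates.
function breakEvenArticles(bulk, perArticle) {
  return Math.ceil(bulk / perArticle);
}

console.log(breakEvenArticles(bulkUpdateSeconds, perArticleSeconds)); // 10
```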
Set the password from the CLI. This is a really bad idea for anything but test users with fixed dummy passwords, since the password may leak e.g. via your Bash history.
Set defaults for --web-* options that are useful for testing locally:
ourbigbook --web-test
is equivalent to:
ourbigbook --web --web-url http://localhost:3000 --web-user barack-obama --web-password asdf
You can also override those defaults by just specifying them normally, e.g. to use a different user:
ourbigbook --web-test --web-user donald-trump
Set a custom URL for -W, --web from the command line. If not given, the canonical ourbigbook.com is used. This option is useful e.g. for testing locally with:
ourbigbook --web --web-url http://localhost:3000
Also consider --web-test for local testing.
Set the username for -W, --web from the command line, e.g.:
ourbigbook --web --web-url http://localhost:3000 --web-user barack-obama
If not given:
- use the latest previous successful web login with ourbigbook --web, if there are any. In that case, the CLI informs you with a message of the type: Using previous username: barack-obama
- otherwise, you will be prompted for it from the command line.
OurBigBook configuration file that affects the behaviour of ourbigbook for all files in the directory.
ourbigbook.json is not used for input from stdin, since we are mostly doing quick tests in that case.
While ourbigbook.json is optional, it is used to determine the toplevel directory of an OurBigBook project, which has some effects such as those mentioned at the toplevel index file.
Therefore, it is recommended that you always have an ourbigbook.json in your project's toplevel directory, even if it is going to be an empty JSON containing just:
{}
For example, if you convert a file in a subdirectory such as:
ourbigbook subdir/notindex.bigb
then ourbigbook walks up the filesystem tree looking for an ourbigbook.json, e.g.:
- is there a ./subdir/ourbigbook.json?
- otherwise, is there a ./ourbigbook.json?
- otherwise, is there a ../ourbigbook.json?
- otherwise, is there a ../../ourbigbook.json?
and so on.
If we reach the root path / and no ourbigbook.json is found, then we understand that there is no ourbigbook.json file present.
List of JavaScript regular expressions. If a file path matches any of them, then override ignore and don't ignore the path. E.g., if you have several .scss examples that you don't want to convert, but you do want to convert the main.scss for the website itself:
"ignore": [
".*\\.scss"
]
"dontIgnore": [
"main.scss"
]
Note however that if an upper directory is ignored, then we don't recurse into it, and dontIgnore will have no effect.
List of paths relative to the project toplevel directory that OurBigBook CLI will ignore, unless they also have a match in dontIgnore.
Each entry is a JavaScript regular expression, and it must match the entire path from start to end to count.
If a directory is ignored, all its contents are also automatically ignored.
Useful if your project has a large directory that does not contain OurBigBook sources, and you don't want OurBigBook to mess with it.
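The "match the entire path" rule can be illustrated as follows; the explicit ^ and $ anchoring here is our assumption about how such a rule is typically implemented, not a copy of the actual OurBigBook code:

```javascript
// An ignore pattern must match the entire path, not just a substring.
function matchesIgnorePattern(pattern, filePath) {
  return new RegExp('^(?:' + pattern + ')$').test(filePath);
}

console.log(matchesIgnorePattern('.*\\.tmp', 'notes/draft.tmp')); // true
console.log(matchesIgnorePattern('web', 'web'));                  // true
console.log(matchesIgnorePattern('web', 'web/myfile.bigb'));      // false: only a prefix matches
```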
Only ignores recursive conversions, e.g. given:
"ignore": [
"web"
]
doing:
ourbigbook .
skips that directory, but:
ourbigbook web/myfile.bigb
converts it because it was explicitly requested.
Examples:
- ignore all files with a given extension:
"ignore": [ ".*\\.tmp", ]
Yes, it is a bit obnoxious to have to escape the . and the backslash. We should use some proper globbing library like: github.com/isaacs/node-glob. But on the other hand, ignore from .gitignore makes this mostly useless, as .gitignore will be used most of the time.
TODO: also ignore during -w, --watch.
Similar to ignore, but only ignores the files from rendering conversions such as bigb -> html, scss -> css.
Unlike ignore, matching files are still placed under the _raw directory and can be publicly viewed.
You almost always want this option over ignore, with files that should not be in the repository being just ignored with your .gitignore instead: Section 5.8.1. "Ignore from .gitignore".
Dictionary of options that control automatic ID from title generation.
If true, does Latin normalization on the title. Default: true.
ASCII normalization is a custom OurBigBook-defined normalization that converts many characters that look like Latin characters into Latin characters.
For now, we are using the deburr method of Lodash: lodash.com/docs/4.17.15#deburr, which only affects Latin-like characters.
In addition to deburr we also convert:
- en-dash and em-dash to the simple ASCII dash -. Wikipedia loves en-dashes in their article titles!
- Greek letters are replaced with their standard Latin names, e.g. α to alpha
One notable effect is that it converts variants of ASCII letters to ASCII letters, e.g. é to e, removing the accent.
This operation is kind of a superset of Unicode normalization acting only on Latin-like characters, where Unicode basically only removes things like diacritics.
OurBigBook normalization on the other hand also does other natural transformations that Unicode does not do, e.g. æ to ae, as encoded by deburr and further custom replacements.
TODO lodash.deburr:
- only deals with the Unicode blocks "Latin-1 Supplement" and "Latin Extended-A", notably missing Latin Extended-B, C and D, which contain some important characters. Pull requests have been ignored, so maybe we should just code our own on top.
- misses some candidates in letterlike symbols
- mathematical operators block
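For comparison, plain Unicode normalization removes diacritics but does not perform the extra transformations described above; this sketch uses only the standard String.prototype.normalize:

```javascript
// NFD decomposes 'é' into 'e' plus a combining acute accent; stripping the
// combining-mark range then leaves plain ASCII.
function stripDiacritics(s) {
  return s.normalize('NFD').replace(/[\u0300-\u036f]/g, '');
}

console.log(stripDiacritics('é')); // 'e'
// But NFD alone does nothing for ligature-like characters such as 'æ':
console.log(stripDiacritics('æ')); // 'æ', unchanged: a deburr-style mapping is still needed
```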
Bibliography:
If true, does punctuation normalization on the title. Default: true.
Some selected punctuation marks are automatically converted into their dominant corresponding pronunciations. These are:
- %: percent
- &: and
- +: plus
- @: at
- − (Unicode minus sign, U+2212, distinct from the ASCII hyphen): minus
Dashes are added around the signs if needed, e.g.:
- C++: c-plus-plus
- Q&A: q-and-a
- Folding@home: folding-at-home
Dictionary of lint options to enable. OurBigBook tries to be strict about forcing specific styles by default, e.g. forbids triple newline paragraph. But sometimes we just can't bear it :-)
Possible values:
- parent: forces headers to use the \H parent argument to specify their level
- number: forces headers to not use the \H parent argument to specify their level, i.e. to use a number of =
You should basically always set either one of those on any serious project. Forgetting a parent= in a project that uses parent= everywhere else is a common cause of build bugs, and can be hard to debug without this type of linting enabled.
Possible values:
- child: forbids headers from using the \H tag argument. They should instead use the \H child argument.
- tag: forbids headers from using the \H child argument. They should instead use the \H tag argument.
This dictionary stores options related to headers.
Sets the default \H numbered argument of the toplevel headers of each source file.
Note that since the option is inherited by descendants, this can also affect the rendering of ancestors.
github.com/ourbigbook/ourbigbook/issues/188 contains a proposal to instead inherit this property across includes.
If you set this ourbigbook.json option:
{
"h": {
"numbered": true
}
}
it is possible to override it for a specific file with an explicit \H numbered=0 argument:
= Not numbered exception
{numbered=0}
== Child also inherits not numbered
Make every link to something that is not on the current page open in a new tab instead of the current one, i.e. add target="_blank" to such links.
This option is exactly analogous to the numbered option, but it affects the \H splitDefault argument instead of the \H numbered argument.
If given, the toplevel output of each input source is always non-split, and a split version is not generated at all.
This of course overrides the \H splitDefault argument for toplevel headers, making any links go to the non-split version, as we won't have a split version at all in this case. E.g.:
ourbigbook.json
{
"h": {
"splitDefault": true,
"splitDefaultNoToplevel": true,
}
}
my-first-header.bigb
= My first header
== My second header
When converted with:
ourbigbook --split-headers my-first-header.bigb
it would lead to only two output files:
- my-first-header: not split
- my-second-header: split
Without splitDefaultNoToplevel we would instead have:
- my-first-header: split
- my-first-header-nosplit: not split
- my-second-header: split
The initial use case for this was in OurBigBook Web. If we didn't do this, then there would be two versions of every article at the toplevel of a file: split and nosplit.
This would be confusing for users, who would e.g. see two new articles on the article index every time they create a new one.
It would also mean that metadata such as comments would be visible in two separate locations.
So instead of filtering the duplicate articles on every index, we just don't generate them in the first place.
If false, implies --no-html-x-extension.
The initial application of this option was to Section 5.7. "Redirect from a static website to a dynamic website".
The media-providers entry of ourbigbook.json specifies properties of how media such as images and videos are retrieved and rendered.
The general format of media-providers looks like:
"media-providers": {
"github": {
"default-for": ["image"], // "all" to default for image, video and anything else
"path": "data/media/", // data is gitignored, but should not be nuked like out/
"remote": "ourbigbook/ourbigbook-media"
},
"local": {
"default-for": ["video"],
"path": "media/",
},
"youtube": {}
}
Properties that are valid for every provider:
- default-for: use this provider as the default for the given types of listed macros. The first character of the macros is case insensitive and must be given as lower case. Therefore e.g.:
  - image applies to both image and Image
  - giving Image is an error because that starts with an upper case character
- title-from-src (bool): extract the title argument from the src by default for media such as images and videos, as if the titleFromSrc macro argument had been given, see also: Section 4.2.8.1. "Image ID"
Direct children of media-providers, and subproperties that are valid only for them specifically:
- local: tracked in the current Git repository as mentioned at Section 4.2.8.2.1. "Store images inside the repository itself"
  - path: location of the cloned local repository relative to the root of the main repository
- github: tracked in a separate Git repository as mentioned at Section 4.2.8.2.2. "Store images in a separate media repository"
  - path: analogous to path for local: a local location for this GitHub provider, where the repository can optionally be cloned. When not running with the --publish option, OurBigBook checks if the path exists locally, and if it does, then it uses that local directory as the source instead of the GitHub repository. This allows you to develop locally without Internet and see the latest version of the images without pushing them. During publishing, the GitHub version is used instead. TODO make this even more awesome by finishing the implementation of github.com/ourbigbook/ourbigbook/issues/184:
    - automatically git push this repository during deployment to ensure that any asset changes will be available
    - ignore the path from OurBigBook conversion as if added to ignore, and don't add it to the final output, because you are already going to have a copy of it. This way you can use the sanest approach, which is to track the directory as a Git submodule as mentioned at: store images in a separate media repository and track it as a git submodule, instead of either:
      - keeping it outside of the repository
      - keeping it in the repository but explicitly ignoring it as well, which is a bit redundant
  - remote: <github-username>/<repo-name>
- youtube: YouTube videos
Default: true.
For example with:
{
"outputOutOfTree": false
}
then:
ourbigbook hello.bigb
would place its output under:
hello.html
instead of:
out/html/hello.html
Advantages of outputOutOfTree=true:
- the source tree becomes cleaner, especially when using -S, --split-headers, which can produce hundreds of output files from a single input file
- if you want to track several .html source files in-tree, you don't need to add an exception for each of them in the .gitignore as: *.html !/ourbigbook.liquid.html
Disadvantages:
- you have to type more to open each output file on the terminal
This option is always forced to false when --outdir <outdir> is given.
Implemented at: github.com/ourbigbook/ourbigbook/issues/163
Path of a script that gets executed after conversion, and before upload, when running with the --publish option.
The script arguments are:
- the publish output directory. That directory is guaranteed to exist when prepublish is called. For git-based publish targets, all files are almost ready in there, just waiting for the git add . that follows prepublish. This means that you can use this script to place or remove files from the final publish output.
If the prepublish script returns with a non-zero exit value, the publish is aborted.
If given, use this fixed date as the author and committer date of the publish commit.
All Git date formats are accepted as documented in man git-commit, e.g. 2005-04-07T22:13:13.
Options that should be used only on the published output when publishing with the --publish option.
These options are merged directly into the options of the convert function. One example usage is to redirect all links of your static website to your OurBigBook Web profile:
"publishOptions": {
"htmlXExtension": false,
"ourbigbook_json": {
"toSplitHeaders": true,
"xPrefix": "https://ourbigbook.com/cirosantilli/"
}
},
If given, these options override pre-existing options on the published output.
A custom remoteUrl to push build outputs to.
If not given, this value is extracted by default from the origin remote of the Git repository where the source code is located.
Generate custom redirects.
For example:
"redirects": [
["cirodown", "ourbigbook"]
],
produces a file in the output called cirodown.html that redirects to ourbigbook.html.
Absolute URLs are also accepted, e.g.:
"redirects": [
["ourbigbook", "https://docs.ourbigbook.com"]
],
produces a file in the output called ourbigbook.html that redirects to https://docs.ourbigbook.com.
When dealing with regular headers, you generally don't want to use this option, and should instead use the \H synonym argument, which already creates the redirection for you.
This JSON option can be useful however for dealing with things that are outside of your OurBigBook project.
For example, at one point, this project renamed the repository github.com/cirosantilli/cirodown to github.com/ourbigbook/ourbigbook.
Unfortunately, GitHub Pages does not generate redirects like github.com itself.
So in this case, we've added to the ourbigbook.json of the toplevel user repository github.com/cirosantilli/cirosantilli.github.io the lines:
"redirects": [
["cirodown", "ourbigbook"]
],
which produce a file in the output called cirodown.html that redirects to ourbigbook.html.
In this case, cirodown and ourbigbook don't have to be any regular IDs present in the database; those strings are just used directly.
TODO: ideally we should check for conflicts with regular output from split header IDs or their synonyms. But lazy.
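The generated redirect file is presumably a small HTML page along these lines; this is an illustrative sketch, not the exact markup OurBigBook emits:

```javascript
// Build a minimal client-side redirect page for a given target URL.
function redirectHtml(targetUrl) {
  return '<!DOCTYPE html>\n' +
    '<html><head>\n' +
    '<meta http-equiv="refresh" content="0; url=' + targetUrl + '">\n' +
    '<link rel="canonical" href="' + targetUrl + '">\n' +
    '</head><body><a href="' + targetUrl + '">' + targetUrl + '</a></body></html>\n';
}

console.log(redirectHtml('ourbigbook.html').includes('url=ourbigbook.html')); // true
```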
Select the template Liquid file to use. Serves as the default for the template option.
If this option is not given, and a file ourbigbook.liquid.html exists in the project, then that file is used.
If ourbigbook.liquid.html exists but you don't want to use it, set the option to null and it won't be used.
Make every internal cross reference point to the split header version of the pages of the website. Do this even if those pages don't exist, or if they are not the default target, e.g. as per the \H splitDefault argument.
The initial application of this option was to Section 5.7. "Redirect from a static website to a dynamic website".
If this option is set, then nosplit/split header metadata links are removed, since it was hard to come up with a sensible behaviour for them, and it won't matter on web redirection, where every page is non-split anyway.
This dict contains options related to interaction between OurBigBook CLI and OurBigBook Web deployments.
Select the default host used for:
- web publishing: serves as the default domain for --web-url
- the static website, where it is used e.g. by linkFromStaticHeaderMetaToWeb
Defaults to ourbigbook.com, the reference Web instance.
Capitalized version of host, e.g. OurBigBook.com.
Default:
- if host is given, use it
- otherwise, OurBigBook.com
Shows up on linkFromStaticHeaderMetaToWeb as a potentially more human readable version of the hostname.
Type: boolean. Default: false.
If true, adds a link under the metadata section of every header of an OurBigBook CLI static website pointing to the corresponding article on OurBigBook.com, or on another OurBigBook Web instance specified by the host option.
It also sends you to Heaven for supporting the project.
This option requires username to be set.
For example, if you set:
"web": {
"username": "myusername",
"linkFromStaticHeaderMetaToWeb": true
}
then in the rendering of a README.bigb containing:
= Index
== My h2
{scope}
=== My h2 2
{scope}
those headers would have a metadata entry pointing respectively to:
https://ourbigbook.com/myusername
https://ourbigbook.com/myusername/my-h2
https://ourbigbook.com/myusername/my-h2/my-h2-2
In order for such links not to be broken, you should always first do a Web upload to ensure that the articles are present on OurBigBook.com.
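The username-to-URL mapping above can be sketched as follows; this is a hypothetical helper assuming the default ourbigbook.com host, not the actual rendering code:

```javascript
// Map a username and a scoped article ID to its OurBigBook Web URL.
// An empty ID means the user's toplevel index page.
function webArticleUrl(username, id, host = 'ourbigbook.com') {
  return 'https://' + host + '/' + username + (id ? '/' + id : '');
}

console.log(webArticleUrl('myusername', ''));              // https://ourbigbook.com/myusername
console.log(webArticleUrl('myusername', 'my-h2/my-h2-2')); // https://ourbigbook.com/myusername/my-h2/my-h2-2
```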
Previously named linkFromHeaderMeta.
Type: string. Sets your OurBigBook.com username. This is used e.g. by linkFromStaticHeaderMetaToWeb.
If given, prepend the given string to every single internal cross file reference output.
The initial application of this option was to Section 5.7. "Redirect from a static website to a dynamic website".
E.g. suppose that at myoldsite.com you previously had:
animal.bigb
= Animal
<Dogs> don't eat <bananas>.
== Dog
plant.bigb
= Plant
== Banana
Originally that would render as:
<a href="#dog">Dogs</a> don't eat <a href="plant#banana">bananas</a>.
But then if you set in ourbigbook.json:
{
"xPrefix": "https://mynewsite.com/"
}
it will instead render as:
<a href="#dog">Dogs</a> don't eat <a href="https://mynewsite.com/plant#banana">bananas</a>.
where:
- dogs: untouched as it links to the same page as the current one
- bananas: the prefix is added, as it is on another page
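The rule above can be sketched as follows; this is a hypothetical helper illustrating the behaviour, not the actual OurBigBook rendering code:

```javascript
// Build the href for a cross reference: same-page targets keep a bare
// fragment; cross-page targets get the xPrefix prepended.
function refHref(currentPage, targetPage, fragment, xPrefix) {
  if (targetPage === currentPage) return '#' + fragment;
  return xPrefix + targetPage + '#' + fragment;
}

const xPrefix = 'https://mynewsite.com/';
console.log(refHref('animal', 'animal', 'dog', xPrefix));   // #dog
console.log(refHref('animal', 'plant', 'banana', xPrefix)); // https://mynewsite.com/plant#banana
```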
Scopes are automatically resolved so that they will also be present in the target. E.g. in:
subdir/notindex.bigb
<notindex2>
subdir/notindex2.bigb
= Notindex2
we get on subdir/notindex.html:
<a href="https://mynewsite.com/subdir/notindex2.html">
and not:
<a href="https://mynewsite.com/notindex2.html">
This section describes how to generate mass redirects from a static website such as cirosantilli.com to an OurBigBook Web dynamic website such as ourbigbook.com/cirosantilli.
The use case of this is if you are migrating from one domain to another, and want to keep old files around to not break links, but would rather redirect users to the new preferred pages instead to gather PageRank there.
This happened in our case when Ciro felt that OurBigBook Web had reached enough maturity to be a reasonable reader alternative to the static website.
Basically what you want to do in that case is to use the following options, as in:
"publishOptions": {
"toSplitHeaders": true,
"htmlXExtension": false,
"xPrefix": "https://ourbigbook.com/cirosantilli/"
},
The following files are ignored from conversion:
- ignore patterns
- gitignored files: Section 5.8.1. "Ignore from .gitignore"
- a few hardcoded basenames, such as .git and the out directory, see DEFAULT_IGNORE_BASENAMES in ourbigbook
Note that this applies even if you try to convert a single ignored file such as:
ourbigbook ignored.bigb
We are strict about this in order to prevent accidentally polluting the database with temporary data.
If the project is a Git tracked project, the standard git ignore rules are used for ignores. This includes .git/info/exclude, .gitignore and the user's global gitignore file if any.
TODO: get this working. Maybe we should also bake it into the ourbigbook CLI tool as well for greater portability. Starting like this as a faster way to prototype:
rm -rf out/parallel
mkdir -p out/parallel
# ID extraction.
git ls-files | grep -E '\.bigb$' | parallel -X ourbigbook --no-render --no-check-db --outdir 'out/parallel/{%}' '{}'
./merge-dbs out/db.sqlite3 out/parallel/*/db.sqlite3
ourbigbook --check-db
# Render.
git ls-files | grep -E '\.bigb$' | parallel -X ourbigbook --no-check-db '{}'
Observed --no-render speedup on 1k small files from the Wikipedia bot and 8 cores: 3x. So not bad.
Observed render speedup on 1k small files from the Wikipedia bot and 8 cores: none. TODO: is this because of database contention?
The main entry point for the JavaScript API is the ourbigbook.convert function.
An example can be seen under lib_hello.js.
Note that while doing a simple conversion is easy, things get harder if you want to take multi-file features into consideration, notably cross file reference internals.
This is because these features require interacting with the ID database, and we don't do that from the default ourbigbook.convert API because different deployments will have very different implementations, notably:
- a local Node.js run uses SQLite; an implementation can be seen in the ourbigbook file class SqlDbProvider
- the in-browser version that runs in the browser editor of OurBigBook Web makes API calls to the server
These are variables that affect the OurBigBook Library itself, and therefore also get picked up by OurBigBook CLI and OurBigBook Web.
For boolean environment variables, the value for "true" should be 1, e.g. as in:
OURBIGBOOK_POSTGRES=1 ./bin/generate-demo-data.js
Every other value is considered false, including e.g. true.
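The convention can be sketched as follows; this is an illustrative helper, not the library's actual parsing code:

```javascript
// A boolean environment variable is true only when set to exactly '1'.
function envToBool(value) {
  return value === '1';
}

console.log(envToBool('1'));       // true
console.log(envToBool('true'));    // false
console.log(envToBool(undefined)); // false
```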
The convert JavaScript function of the OurBigBook Library is the central OurBigBook Markup conversion function of the OurBigBook Project.
That function converts an input string to various output formats, and has many options.
It is used by both OurBigBook CLI and OurBigBook Web.
OurBigBook Web is the software program that powers OurBigBook.com, its official flagship instance, see also: Section 7.1.8. "OurBigBook.com". OurBigBook Web is currently the main tool developed by the OurBigBook Project.
This section contains both end user documentation and developer documentation.
More subjective rationale, motivation and planning aspects of the project are documented at: cirosantilli.com/ourbigbook-com.
OurBigBook Web is a bit like Wikipedia, but where each user can have their own version of each page, and it cannot be edited by others without permission.
And it is also a bit like Obsidian (a personal knowledge base): you can optionally keep all your notes in plaintext markup files in your computer and publish either on OurBigBook.com or as a static HTML website on your own domain.
The goal of the OurBigBook Project is to make university students write perfect natural sciences books for free as they are trying to learn for their lectures.
Suppose that Mr. Barack Obama is your calculus teacher this semester.
Being an enlightened teacher, Mr. Obama writes everything that he knows on his OurBigBook.com account. His home page looks something like the following tree:
- ourbigbook.com/barack-obama: Obama's toplevel index pages linking to all his pages
On your first day of class, Mr. Obama tells his students to read the "Calculus" section, ask him any questions that come up online, and just walks away. No time wasted!
While you are working through the sections under "Calculus", you happen to notice that the "Fundamental theorem of calculus" article is a bit hard to understand. Mr. Obama is a good teacher, but no one can write perfect tutorials of every little thing, right?
This is where OurBigBook comes to your rescue. There are two ways that it can help you solve the problem:
Topics group articles that have the same title by different users. This feature allows you to find the best article for a given topic, and it is one of the key innovations of OurBigBook Web.
Topics are a bit like Twitter hashtags or Quora questions: their goal is to centralize knowledge about a specific subject by different people at a single location.
If even existing topics and discussions have failed you, and you have finally understood a subject after a few hours of Googling, why not share your knowledge by creating a new article yourself?
There are a few ways to do that.
OurBigBook Web implements what we call "dynamic article tree".
What this means is that, unlike the static website generated by OurBigBook CLI where you know exactly which headers will show as children of a given header, we just dynamically fetch a certain number of descendant pages at a time.
As an example of the dynamic article tree, note how the article "Special relativity" can be seen in all of the following pages:
- ourbigbook.com/barack-obama/special-relativity as the toplevel article
- ourbigbook.com/barack-obama/physics as a child
- ourbigbook.com/barack-obama/natural-science as the child of a child
The only efficient way to do this is to pick which articles will be rendered as soon as the user makes the request, rather than having fully pre-rendered pages, thus the name "dynamic".
The design goals of the dynamic article tree are to produce articles such that:
- each article can appear as the toplevel article of a page, to get better SEO opportunities
- the page that contains the article can also contain as many descendants as we want to load, not just the article itself, so as to not force readers to click a bunch of links to read more
For example, with a static website, a user could have a page structure such as:
natural-science.bigb
= Natural science
== Physics
\Include[special relativity]
special-relativity.bigb
= Special relativity
== Lorentz transformation
In the static output, we would have two output files:
natural-science.html
special-relativity.html
plus one split output file for each header if -S, --split-headers were enabled:
natural-science-split.html
physics.html
special-relativity-split.html
lorentz-transformation.html
In this setup, the header "Physics" for example is present in one of two possible pages:
- natural-science.html: as a subheader, but "Special relativity" is not shown even though it is a child
- physics.html: as the top header, and "Special relativity" is still not shown as we are in split mode
In the case of the dynamic article tree however, we achieve our design goals:
- "Physics" is the toplevel header, and therefore can get much better SEO
- "Special relativity", "Lorentz transformation" and any other descendants will still show up below it, so it is much more readable than a page with just the article itself
We then just cut off at 100 articles to not overload the server and browsers on very large pages. Sometimes those pages can still be accessed through the ToC, which has a larger limit of 1000 entries. We also want to implement "load more articles" to allow users to click to load more.
And all of that is achieved:
- without requiring authors to manually determine which headers are toplevel or not to customize page splits with reasonable load sizes.
- without keeping multiple copies of the render output of each page and corresponding pre-rendered ToCs. On the static website, we already had two renderings for each page: one split and one non-split, and the ToCs were huge and copied everywhere. Perhaps the ToC issue could be resolved with some runtime fetching of static JSON, but then that is bad for SEO.
The downside of the feature is slightly slower page loads and a bit more server workload. We have kept it quite efficient server-side by implementing the page fetching with nested sets.
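The nested set idea can be sketched as follows; the data and field names are illustrative, not the actual OurBigBook Web schema:

```javascript
// Minimal nested set model: each node gets a (left, right) interval, and all
// of a node's descendants fall strictly inside its interval, so fetching a
// whole subtree is a single range scan rather than a recursive walk.
const nodes = [
  { id: 'natural-science',        left: 1, right: 8 },
  { id: 'physics',                left: 2, right: 7 },
  { id: 'special-relativity',     left: 3, right: 6 },
  { id: 'lorentz-transformation', left: 4, right: 5 },
];

function descendants(nodes, id) {
  const root = nodes.find(n => n.id === id);
  return nodes
    .filter(n => n.left > root.left && n.right < root.right)
    .map(n => n.id);
}

console.log(descendants(nodes, 'physics')); // [ 'special-relativity', 'lorentz-transformation' ]
```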
We believe that the dynamic article tree offers a very good tradeoff between server load, load speeds, SEO, readability and author friendliness.
Each article has their own discussion section. This way you can easily see if other students have had the same problem as you and asked about it already.
OurBigBook Web comes with a browser text editor where users can create and edit their articles in OurBigBook Markup.
This is for example the editor you see when creating a new article at: ourbigbook.com/go/new
One day we want to add an option to have a visual editor: Section 12.15.6.1.54. "WYSIWYG", but for now we'll try to make the text editor as awesome as we can.
Marking a page as the child of another page is easy in OurBigBook Web: you can simply set the parent of the page directly on the editor UI.
If you don't want the article to be the first child of a parent, you can also set the "previous sibling" field. This specifies after which article the new article will be inserted.
The current setup works as follows.
Suppose you have a page titled:
Calculus
and therefore with an ID:
calculus
that appears under: ourbigbook.com/barack-obama/calculus
Suppose you want to rename it to "Calculus 2" to have an ID of calculus-2.
The procedure is:
- set the title to Calculus 2
- set Calculus as a synonym of the article, by adding to the top of the article body: Calculus {synonym}
As a result of this:
- the page will now be hosted under ourbigbook.com/barack-obama/calculus-2
- ourbigbook.com/barack-obama/calculus becomes a redirect to ourbigbook.com/barack-obama/calculus-2
- articles that point with an internal cross reference to calculus are unmodified, and still link to: ourbigbook.com/barack-obama/calculus. But that link works due to the redirect.
This is not super user friendly, and could be made better by:
- moving synonym from source to widgets: move all header metadata from source to HTML in Web
- actually updating all references on other files to the new value. This could be done e.g. by creating a worker thread and marking all references as outdated.
- Global:
- Ctrl + Enter: submit the current form, e.g. save/create articles or comments, login, register
- TODO N: create a new article. This requires making sure that all input fields and textareas don't propagate N key events. We did that as a one-off for E in comment textareas.
- Page specific:
- Article page (and for Index page on user page):
- E: edit the page
- L: like or unlike the page. This would require moving like state out of the Like button, which is a bit annoying.
There are currently a few constructs that are legal in OurBigBook CLI but forbidden in Web and will lead to upload errors. TODO we should just make those forbidden on CLI by default with a flag to re-enable if users really want to make their source incompatible with web:
- IDs with uppercase characters as per JavaScript case conversion
OurBigBook.com is the reference public instance of OurBigBook Web!
This section describes setup and policies specific to that instance, and which don't necessarily apply to other instances people may host elsewhere.
This section describes policies specific to the OurBigBook.com instance of OurBigBook Web.
Documentation present under the OurBigBook Web user manual describes OurBigBook Web in general.
These policies apply only to the official reference OurBigBook.com instance. If you host your own OurBigBook Web, there are no constraints imposed on your content, only on the source code as per LICENSE.txt.
All content that you upload that you own the copyright for is automatically dual licensed under the Creative Commons CC BY-SA 4.0. This is for example the same license family used by Wikipedia.
Starting from August 22 2024, users also automatically grant to the OurBigBook Project a non-exclusive license to relicense their content. This could be used for example to:
- sell the content to companies that do not wish to comply with the CC BY-SA license, e.g. for LLM training. We will try to avoid ever doing this as much as possible, since it goes against the project's vision for open knowledge. But it could one day be the difference between life and death of the project, so we'd like to keep that door open just in case. Any such relicensing deals will be transparently announced.
- add a new license to content on the website which we feel might better serve all users
Any such relicensing does not affect the original CC BY-SA 4.0 license nor your ownership of the content. It only adds new licenses on top of it. This way the content remains free no matter what.
If you don't own the copyright for a work, you may still upload it if its license allows for "perpetual (non-expiring) and non-revocable" usage. This allows for example:
- all Creative Commons licenses
- the GNU General Public License
and so on.
Note however that the "non-commercial" (NC) and "no derivatives" (ND) CC licenses are basically legal minefields, as it can be very subjective to decide what counts as commercial or a derivative, and so we will immediately take down material upon copyright owner request, as we are not ready to test this in court!
For example:
- it has not yet been decided if the OurBigBook Project will be run as a not-for-profit or for-profit organization. If a for-profit model is chosen, NC copyright owners could feel that their content being merely hosted on ourbigbook.com might constitute a for-profit usage, as it could help bring publicity to the site. The project makes the following commitment however: if a way is ever found to make money from the project, all NC content will be excluded from any directly monetizable money-making activities, e.g. ads or otherwise.
- whether each of the following constitutes a derivative or not:
- a table of contents that mirrors a ND work, but without the actual contents, which would automatically be filled with "the most upvoted article in a given topic"
- a section of ND content without the rest of the work?
- ND content but with extra article interlinking added?
- ND content with IDs (such as HTML id= elements) but where IDs have been
- a public modification request to an ND content?
Unfortunately, NC is extremely popular amongst academics, presumably due to professors' hopes that one day their notes may become a book that sells for money, or maybe simply for idealistic reasons, and it would be too hard to fight against such licenses at this point in time.
Ultimately the project will have to decide if such licenses are worth the trouble or not, and if one day it seems apparent that they are not, a mass take down may happen. But for now we are willing to try. Wikimedia Commons, for example, has decided not to allow NC and ND.
Content that is not freely licensed might be allowed for upload under a fair use rationale. Fair use is murky waters. Wikipedia for example takes a very strict approach of very limited fair use: en.wikipedia.org/wiki/Wikipedia:Non-free_content, but we are more relaxed about it, and only take gray cases down upon copyright owner request.
Some examples of what should generally be OK:
- quote up to a paragraph from a copyrighted book, clearly attributing it
- explain what you've learned from a book or course in your own words. You also have to take some care not to copy the exact structure of the original, as that itself could be subject to copyright. One good approach is to just use several sources: if multiple sources use the same structure, then it is more arguable that the structure is not a novel copyrighted thing.
- use a copyrighted image when there is no free alternative to illustrate what you are talking about
If the copyright owner complains in such cases, we might have to take something down, but as long as you are not just uploading a bunch of obviously copyrighted content, it's not the end of the world, we'll just find another freer way to explain things without them.
More egregious cases, such as the upload of:
- entire copyrighted books
- copyrighted pieces of music
and so on, will obviously be taken down preemptively as soon as noticed, even without a take down request.
Anything you want, as long as it is legal. This notably includes not violating copyright, see also: OurBigBook.com content license.
At some distant point in the future we could start letting people self tag content that is illegal in certain countries or for certain age groups, and we could then block this content to satisfy the laws of each country.
Websites such as Wikipedia or Stack Exchange have a political system where users can gain privileges, and once they have gained those privileges, they can edit or delete your content.
In OurBigBook Web, unless you explicitly give other users permission to do so, only admins of the website can ever delete any content, and that will only ever be done if:
- the content is illegal, see also What content can I publish on OurBigBook.com?
- you are trying to hack us!
Admins will always be a small number of people, either employed by, or highly trusted by OurBigBook Project leaders. They are not community elected. Their actions may be reversed at anytime by the OurBigBook Project leadership.
We haven't implemented it yet, but it is an important feature that we will implement: you will be able to download all your content as a .zip file containing OurBigBook Markup files, and then you will be able to generate the HTML for your content on your own computer with the open source OurBigBook implementation. There are then several alternative ways to host the generated HTML files, including free ones such as GitHub Pages.
OurBigBook Web is a regular database-backed dynamic website. This is unlike the static websites generated by OurBigBook CLI:
- static websites are simpler and cheaper to run, but they are harder to set up for non-programmers
- static websites cannot have multiuser features such as likes, comments, and "view versions of this article by other users", which are core functionality of the OurBigBook Project
The source code for OurBigBook Web is fully contained under the web/ directory of the OurBigBook Project source code. OurBigBook Web can be seen as a separate Node.js package which uses the OurBigBook Library as a dependency.
OurBigBook Web was originally forked from the following starter boilerplate: github.com/cirosantilli/node-express-sequelize-realworld-example-app. We try to keep the tech synced as much as possible between both projects, since the boilerplate is useful as a tech demo to quickly try out new technologies in a more minimal setup, but it has started to lag a bit behind. The web stack of OurBigBook Web is described at: OurBigBook Web tech stack.
It is highly recommended that you use the exact same Node.js and NPM versions as given under the package.json engines entry. The best way to do that is likely to use NVM as explained at: stackoverflow.com/questions/16898001/how-to-install-a-specific-version-of-node-on-ubuntu/47376491#47376491 Using NVM also removes the need for sudo in global install commands such as npm run link.
First time setup:
cd ourbigbook &&
npm run link &&
npm run build-assets &&
cd web/ &&
npm install &&
./bin/generate-demo-data.js --users 2 --articles-per-user 10
# Or short version:
#./bin/generate-demo-data.js -u 2 -a 10
where:
- npm run build-assets needs to be re-run if any assets (e.g. the CSS or JS files mentioned at overview of files in this repository) on the ./ourbigbook/ toplevel are modified. There is no need to re-run it for changes under web/. To develop files from outside web/, also consider npm run webpack-dev, as mentioned at: _obb directory.
- web/bin/generate-demo-data.js also creates the database and is not optional. If you want to start with an empty database instead of the demo one, you can instead run web/bin/sync-db.js:
./bin/sync-db
We also provide a shortcut for that setup as:
npm run web-setup
./bin/generate-demo-data.js --users 2 --articles-per-user 10
After this initial setup, run the development server:
npm run dev
And the website is now running at localhost:3000. If you created the demo data, you can login with:
- email: user0@mail.com, user1@mail.com, etc.
- password: asdf
Custom demo user passwords can be set by exporting the OURBIGBOOK_DEMO_USER_PASSWORD variable, e.g.:
OURBIGBOOK_DEMO_USER_PASSWORD=qwer ./bin/generate-demo-data.js -u 2 -a 10
This is useful for production.
To run on a different port use:
PORT=3001 npm run dev
We also offer shortcuts on the toplevel for the npm install and npm run dev commands, so you can skip the cd web for those:
npm install
npm run dev
Whenever you save any changes to the backend server code, we listen for this and automatically restart the server, so after a few seconds or less you can refresh the web page to obtain the backend update.
For frontend, changes are automatically recompiled by the webpack development server, so you can basically just refresh pages and they will be updated straightaway.
- OurBigBook.com user: ourbigbook.com/wikibot
- Static website render: wikibot.ourbigbook.com
- Static website source code: github.com/ourbigbook/wikibot
This bot imports the Wikipedia article category tree into OurBigBook. Only titles are currently imported, not the actual article content.
This is just an exploratory step to future exports or generative AI.
We don't have as good an automation setup as we should, but the steps are:
- obtain enwiki.sqlite containing the tables page and categorylinks: stackoverflow.com/questions/17432254/wikipedia-category-hierarchy-from-dumps/77313490#77313490
- run cirosantilli.com/_raw/wikipedia/sqlite_preorder.py, potentially differently parametrized, as shown below
To publish as a Static website we do:
rm -rf out
./sqlite_preorder.py -D3 -d6 -Obigb -m -N enwiki.sqlite Mathematics Physics Chemistry Biology Technology
cd out
ls . | grep -E '\.bigb$' | xargs sed -i -r '${/^$/d}'
echo '{}' > ourbigbook.json
echo '*.tmp' > .gitignore
ourbigbook .
git init
git add .
(
export GIT_COMMITTER_EMAIL='bot@mail.com'
export GIT_COMMITTER_NAME='Mr. Bot'
export GIT_COMMITTER_DATE="2000-01-01T00:00:00+0000"
export GIT_AUTHOR_EMAIL="$GIT_COMMITTER_EMAIL"
export GIT_AUTHOR_NAME="$GIT_COMMITTER_NAME"
export GIT_AUTHOR_DATE="$GIT_COMMITTER_DATE"
git config --add user.email "$GIT_COMMITTER_EMAIL"
git config --add user.name "$GIT_COMMITTER_NAME"
git commit --author "${GIT_COMMITTER_NAME} <${GIT_COMMITTER_EMAIL}>" -m 'Autogenerated commit'
)
and for OurBigBook Web it is important to use the --web-nested-set-bulk option to speed things up:
ourbigbook --web --web-nested-set-bulk
The current limiting factor on the number of articles per user is the memory usage of the nested set generation. We've managed to review this and reduce it with attribute selection, but we have not yet been able to scale it indefinitely, e.g. we would not be able to handle 1M articles per user. The root problems are:
- lack of depth-first traversal in SQLite due to the lack of arrays, as opposed to PostgreSQL: stackoverflow.com/questions/65247873/preorder-tree-traversal-using-recursive-ctes-in-sql/77276675#77276675
- lack of proper streaming in Sequelize: stackoverflow.com/questions/28787889/how-can-i-set-up-sequelize-js-to-stream-data-instead-of-a-promise-callback
Now let's look at the shape of the data. Total pages:
sqlite3 enwiki.sqlite 'select count(*) from page'
gives ~59M.
Total articles:
sqlite3 enwiki.sqlite 'select count(*) from page where page_namespace = 0'
gives ~17M.
Total non-redirect articles:
sqlite3 enwiki.sqlite 'select count(*) from page where page_namespace = 0 and page_is_redirect = 0'
gives ~6.7M.
Categories:
sqlite3 enwiki.sqlite 'select count(*) from page where page_namespace = 14'
gives ~2.3M.
Allowing for depth 6 of all of STEM:
./sqlite_preorder.py -D3 -d6 -Obigb -m -N enwiki.sqlite Mathematics Physics Chemistry Biology Technology
leads to ~980k articles.
Depth 6 on Mathematics only:
./sqlite_preorder.py -D3 -d6 -Obigb -m -N enwiki.sqlite Mathematics
leads to 150k articles. Some useless hogs and how they were reached:
- Actuarial science via Applied Mathematics: 4k
- Molecular biology via Applied geometry: 4k
- Ship identification numbers via Numbers: 5k
- Galaxies via Dynamical systems: 7k
- Video game gameplay via Game design via Game theory: 17k
Depth 6 on Mathematics + Physics:
./sqlite_preorder.py -D3 -d5 -Obigb -m -N enwiki.sqlite Mathematics Physics
leads to 104k articles.
Allowing for unlimited depth on Mathematics:
./sqlite_preorder.py -D3 -Obigb -m -N enwiki.sqlite Mathematics
seems to reach all ~9M articles + categories, or most of them. We gave up around 8.6M, when things got really, really slow, possibly due to heavy duplicate removal. We didn't log it properly, but depths of 3k+ were seen... so not setting a depth is just pointless unless you want the entire Wiki.
You can generate demo data for OurBigBook Web with web/bin/generate-demo-data.js, e.g.:
cd web
./bin/generate-demo-data --users 2 --articles-per-user 10
Every time this is run, it tries to update existing entities such as users and articles first, and only creates them if they don't exist. This allows us to update all demo data on a live website that also has users without deleting any user data.
Note however that if you ever increase the amount of demo users, you might overwrite real user data. E.g. if you first do:
./bin/generate-demo-data --users 2 --articles-per-user 10
and then some time later:
./bin/generate-demo-data --users 4 --articles-per-user 10
it is possible that some real user will have taken up the username that we use for the third user, which did not exist previously, and the demo data generation would then clobber their articles. So never ever do that! Just stick to the default values in production.
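The upsert-style behavior described above can be sketched as follows. This is an illustrative toy, not the actual generate-demo-data.js implementation: the fake in-memory `db` map and the `upsertDemoUser` name are assumptions for the example.

```javascript
// Illustrative sketch of upsert-style demo data generation: existing entities
// are updated in place, and entities are only created if they don't exist.
const db = new Map(); // fake users table keyed by username

function upsertDemoUser(i) {
  const username = `user${i}`;
  const existing = db.get(username);
  if (existing) {
    // Update in place: does not delete anything the user created themselves.
    existing.displayName = `Demo user ${i}`;
    return existing;
  }
  // Danger: against a real database, if a real user had already registered
  // this username, the demo generator would clobber *their* row. This is why
  // increasing --users in production is unsafe.
  const user = { username, displayName: `Demo user ${i}` };
  db.set(username, user);
  return user;
}

for (let i = 0; i < 2; i++) upsertDemoUser(i);
console.log(db.size); // 2
for (let i = 0; i < 2; i++) upsertDemoUser(i); // second run: updates only
console.log(db.size); // 2
```

Running the generator twice with the same parameters is therefore idempotent, which is what makes updating demo data on a live site safe, as long as the user count never grows.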
As a safeguard, to be able to run this in production you have to also pass the --force-production flag:
./bin/generate-demo-data --users 2 --articles-per-user 10 --force-production
To first fully clear the database, including any real user data, before doing anything else, use --clear, e.g.:
./bin/generate-demo-data --users 4 --articles-per-user 10 --clear
To clear the database and start with an empty database use --empty:
./bin/generate-demo-data --empty
To regenerate the PostgreSQL database instead of SQLite as mentioned at local development run with PostgreSQL:
OURBIGBOOK_POSTGRES=1 ./bin/generate-demo-data.js
By default, when you run web/bin/generate-demo-data.js, besides inserting the data directly into the database, the command also generates an in-filesystem tree that contains equivalent content under:
web/tmp/<username>/<id>.bigb
Sample paths to some files could be:
web/tmp/demo/barack-obama/ourbigbook.json
web/tmp/demo/barack-obama/test-child-1.bigb
web/tmp/demo/barack-obama/test-scope/test-scope-1.bigb
Because each user has its own ourbigbook.json file added to its directory, you can for example build each user directory in isolation with:
cd web/tmp/demo/barack-obama
ourbigbook .
This setup can be useful for quickly testing things locally, and in particular to test -W / --web upload to a local test server.
These files have nothing to do with OurBigBook Web specifically, and would be usable from OurBigBook CLI itself. It would be nice to move them up to OurBigBook CLI at some point, and only expose the Web-specific database functions from Web.
There are a few methods available.
One option is to use the standard Express.js logging mechanism:
DEBUG='sequelize:sql:*' npm run dev
Shortcut:
npm run devs
These logs also include some kind of timing information. However, we are not entirely sure what the timings mean, as they show for both Executing (query is about to start) and Executed (query finished) lines, with possibly different values, e.g.:
sequelize:sql:pg Executing (default): SELECT 1+1 AS result +0ms
sequelize:sql:pg Executed (default): SELECT 1+1 AS result +1ms
The meaning of +0ms and +1ms appears to be the time since the last message with the same ID, i.e. sequelize:sql:pg in this case. Therefore, so long as there wasn't any other sequelize:sql:pg message between an Executed and its corresponding Executing, the Executed timing should give us the query time. This is a bit messy however, as we often want to find the largest numbers for profiling, and there could be a large time delta during inactivity.
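The per-namespace delta behavior can be sketched with a toy reimplementation of the debug-style "+Nms" suffix (this is not the actual debug library code, just a model of its observable behavior):

```javascript
// Toy model of debug-style "+Nms" suffixes: each namespace remembers when it
// last logged, so the suffix is a delta since the previous message in the
// same namespace, not an absolute query duration.
const lastSeen = new Map();

function log(namespace, message, now = Date.now()) {
  const prev = lastSeen.get(namespace);
  const delta = prev === undefined ? 0 : now - prev;
  lastSeen.set(namespace, now);
  return `${namespace} ${message} +${delta}ms`;
}

// With simulated timestamps 1000 and 1001, the Executed delta (+1ms) equals
// the query time only because no other message used the namespace in between.
console.log(log('sequelize:sql:pg', 'Executing (default): SELECT 1+1 AS result', 1000));
console.log(log('sequelize:sql:pg', 'Executed (default): SELECT 1+1 AS result', 1001));
```

This makes it clear why an idle period inflates the next delta: the namespace's "last seen" timestamp keeps aging even when no queries run.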
This tends to be a better way for benchmarking than DEBUG sql:
OURBIGBOOK_LOG_DB=1 npm run dev
which produces many outputs of type:
Executed (default): SELECT 1+1 AS result Elapsed time: 0ms
so we get explicit elapsed time measurements rather than deltas, and without the corresponding Executing marker.
Furthermore, because we try to code the server correctly by making multiple async requests simultaneously wherever possible, the slowest of those requests finishes last, and is the last "Elapsed time" to get logged! So you generally just have to look at the last logged line if there is one slow bottleneck query, rather than going over all the previous "Elapsed time" entries.
This method uses Sequelize's benchmark: true option as per: stackoverflow.com/questions/52260934/how-to-measure-query-execution-time-in-seqilize
It might be wise to enable PostgreSQL query logging by default with log_statement for development. TODO: does it noticeably affect performance?
One major advantage of this method is that Sequelize's error logging is a bit crap, and sometimes the error appears much more clearly in the PostgreSQL logs.
However, you often want to log only a few selected queries, as otherwise it becomes very difficult to determine which query is which, in particular due to asynchronous execution. In this case, use the technique mentioned at: stackoverflow.com/questions/21427501/how-can-i-see-the-sql-generated-by-sequelize-js/21431627#21431627 and just add:
logging: console.log,
to the code of the query you want to log.
Maybe we should do a better integration: stackoverflow.com/questions/70948142/how-to-indent-logged-queries-in-sequelize For now, this is something that we do a lot:
npm install -g sql-formatter
xsel -b | sql-formatter -l postgresql
First run the first time setup from local development server.
Then, when running for the first time, or whenever frontend changes are made, you need to create optimized frontend assets with:
npm run build-dev
before you finally start the server each time with:
npm start
This setup runs the Next.js server in production mode locally. Running this setup locally might help debug some front-end deployment issues.
Building like this notably runs full TypeScript type checking, which is a good way to find bugs early.
But otherwise, for development you will normally just use the local run as identical to deployment as possible setup instead, as that makes iterations quicker since you don't have to re-run the slow npm run build-dev command after every frontend change.
build-dev is needed instead of build because it uses NODE_ENV_OVERRIDE, which is needed because Next.js forces NODE_ENV=production and wontfixed changing it: github.com/vercel/next.js/issues/4022#issuecomment-374010365, and that would lead to the PostgreSQL database being used instead of the SQLite one we want.
build also runs npm run build-assets on the toplevel, which repacks ourbigbook itself and is a bit slow. To speed things up during the development loop, you can also use:
npm run build-dev-nodeps
which only rebuilds web/.
TypeScript type checking can also be run in isolation, as mentioned at Section 7.2.9.1. "OurBigBook Web TypeScript type checking", with:
npm run tsc
PostgreSQL is the database that we use in production, and sometimes it is necessary to test things with it locally.
There are two main types of run with PostgreSQL:
- Local run as identical to deployment as possible: uses PostgreSQL, but also sets as much as possible to match production, including Next.js rendering stuff
- Local development run with PostgreSQL: uses PostgreSQL database, but keeps everything else in development mode
To interactively inspect the local development database use our helper at web/bin/psql:
web/bin/psql
Commands can be run as usual:
web/bin/psql -c 'SELECT * FROM "Article";'
It uses PGPASSWORD as mentioned at: stackoverflow.com/questions/6405127/how-do-i-specify-a-password-to-psql-non-interactively
Before running OurBigBook Web, the PostgreSQL database should be set up with web/bin/pg-setup:
web/bin/pg-setup
This command:
- drops the existing database if any, i.e. nukes all data
- creates a test user
- re-creates the test database
Here we use PostgreSQL instead of SQLite with the prebuilt static frontend.
For when you really need to debug some deployment stuff locally.
Before the first run, do the OurBigBook Web PostgreSQL setup.
Then, after every modification, run:
npm run build-prod
npm run start-prod
and then visit the running website at: localhost:3000/
To optionally nuke the database and create the demo data:
npm run seed-prod
or alternatively, to start from a clean database:
psql -c "DROP DATABASE ourbigbook"
createdb ourbigbook
psql -c 'GRANT ALL PRIVILEGES ON DATABASE ourbigbook TO ourbigbook_user'
You can inspect the database interactively with:
psql ourbigbook
and then running SQL commands.
If you have determined that a bug is PostgreSQL specific, and it is easier to debug it interactively, first create the database as mentioned at local run as identical to deployment as possible and then:
OURBIGBOOK_POSTGRES=1 ./bin/generate-demo-data.js
OURBIGBOOK_POSTGRES=1 npm run dev
or as a shortcut for the run:
npm run dev-pg
Note that doing sync-db also requires NODE_ENV=production as in:
NODE_ENV=production OURBIGBOOK_POSTGRES=1 bin/sync-db.js
because we have to shell out to the ugly migration CLI, and that only understands NODE_ENV.
Setup the database:
web/bin/pg-setup ourbigbook2
OURBIGBOOK_DB_NAME=ourbigbook2 web/bin/pg web/bin/generate-demo-data.js
Run the server:
OURBIGBOOK_DB_NAME=ourbigbook2 npm run dev-pg
Or commonly to run on a different port so that two instances may be accessed separately:
PORT=3001 OURBIGBOOK_DB_NAME=ourbigbook2 npm run dev-pg
To restore a dump to the secondary database:
web/bin/pg_restore -d ourbigbook2 latest.dump
Kill all queries that are currently running on the PostgreSQL database.
Useful in the sad cases where our recursive queries go infinite due to bugs.
web/bin/pg-kill-queries
#!/usr/bin/env bash
script_dir="$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
# https://www.sqlprostudio.com/blog/8-killing-cancelling-a-long-running-postgres-query
"$script_dir/psql" -c "SELECT pg_cancel_backend(pid) FROM pg_stat_activity WHERE state = 'active' and pid <> pg_backend_pid();" "$@"
Save the psql database state as per stackoverflow.com/questions/37984733/postgresql-database-export-to-sql-file with our web/bin/pg_dump helper:
web/bin/pg_dump tmp.dump
Then restore it later with web/bin/pg_restore, as per stackoverflow.com/questions/2732474/restore-a-postgres-backup-file-using-the-command-line:
web/bin/pg_restore tmp.dump
Helper that gives a psql PostgreSQL shell on the default database (ourbigbook).
To select another database use the -d option. E.g. to use the ourbigbook_test database from OurBigBook Web run unit tests in PostgreSQL:
bin/psql -d ourbigbook_test
The helper just forwards everything to the underlying psql command, so you can e.g. run a SQL script stored in a file with:
bin/psql <tmp.sql
or run a single statement with:
bin/psql -c 'select * from "Id"'
web/bin/psql
#!/usr/bin/env bash
db=ourbigbook
args=()
while [ $# -gt 0 ]; do
case "$1" in
-d)
db="$2"
shift 2
;;
*)
args+=("$1")
shift
;;
esac
done
PGPASSWORD=a psql -U ourbigbook_user -h localhost "$db" "${args[@]}"
List all queries that are currently running on the PostgreSQL database.
Useful in the sad cases where our recursive queries go infinite due to bugs.
web/bin/pg-ls-queries
#!/usr/bin/env bash
script_dir="$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
# https://stackoverflow.com/questions/12641676/how-to-get-a-status-of-a-running-query-in-postgresql-database/44211767#44211767
"$script_dir/psql" -c "SELECT datname, pid, state, query, age(clock_timestamp(), query_start) AS age
FROM pg_stat_activity
WHERE state <> 'idle' AND state <> 'idle in transaction'
AND query NOT LIKE '% FROM pg_stat_activity %'
ORDER BY age" "$@"
When developing the backend only, Next.js adds several seconds to the debug loop. This is a life saver in that case:
npm run dev-pg-back
Put a debugger statement where you want to break and run:
npm run devi
The i stands for inspect, as in node inspect.
This pauses at the start of execution. So just run c and normal execution resumes until the debugger; statement is reached.
All our tests are located inside test.js.
They can be run with:
cd web
npm test
The dynamic website tests also use Mocha, just like the tests for OurBigBook CLI and OurBigBook Library, so similar usage patterns apply, e.g. to run just a single test:
npm test -- -g 'substring of test title'
or to show database queries being done in the tests:
DEBUG='*:sql:*' npm test
The tests include two broad classes of tests:
- API tests: launch the server on a random port, and run API commands, thus testing the entire backend
- smaller unit tests that only call certain functions directly
- TODO: create frontend tests: github.com/cirosantilli/node-express-sequelize-nextjs-realworld-example-app/issues/11
To run the tests on PostgreSQL instead of the default SQLite, first setup the test database analogously to local run as identical to deployment as possible:
cd web
bin/pg-setup ourbigbook_test
and then run with:
npm run test-pg
Run only matching tests on PostgreSQL:
npm run test-pg -- -g 'substring of test title'
Running tests erases all data present in the database used. In order to point to a custom database use:
DATABASE_URL_TEST=postgres://realworld_next_user:a@localhost:5432/realworld_next_test npm run test-pg
We don't use DATABASE_URL when running tests as a safeguard, to reduce the likelihood of accidentally nuking the production database.
The test database contains the state of the latest test run at the end of the run. You can inspect it with web/bin/psql:
bin/psql -d ourbigbook_test
By default, we don't make any requests to Next.js, because starting up Next.js is extremely slow for regular test usage and would drive us crazy.
In regular OurBigBook Web usage through a browser, Next.js handles all GET requests for us, and the API only handles the other modifying methods like POST.
However, we are trying to keep the API working equally well for GET, and as factored out with Next.js as possible, so just testing the API GET already gives reasonable coverage.
But testing Next.js requests before deployment is a must, and is already done by default by npm run deploy-prod from Heroku deployment, and can be done manually with:
npm run test-next
or e.g. to run just a single test:
npm run test-next -- -g 'api: create an article and see it on global feed'
or for Postgres:
npm run test-pg-next
These tests are currently very basic, and only check page status. In the future, we can:
- add some HTML parsing to check for page contents in response to GET, just as we already do in the test system of the OurBigBook Library
- go all in and use a JavaScript-enabled test system like Selenium to also test login and data modification from the browser
If you are not making any changes to the website itself, e.g. only to the test system, then you can skip the slow rebuild with:
test-next-nobuild
test-pg-next-nobuild
Note that annoyingly, Next.js reuses the same folder for dev and build runs, so you have to quit your dev server for this to work, otherwise the dev server just keeps writing into the folder and messing up the production build test.
Note that Next.js tests are just present inside other tests, e.g. api: create an article and see it on global feed also tests some stuff when not testing Next.js. Running npm run test-next simply enables the Next.js tests on top of the non-Next.js ones that get run by default.
These tests can only be run in production mode, and so our scripts automatically rebuild every time before running the tests, which makes things quite slow. This is required because in development mode Next.js is extremely soft, and e.g. does not raise 500, instead returning a 200 page with error messages. Bad default.
TypeScript type checking of OurBigBook Web is run automatically during build, e.g. by:
npm run build-dev
as mentioned at local optimized frontend.
To speed up the development loop further, you can run just the TypeScript type checking with:
cd web
npm run typecheck
The output format is also a bit nicer than what is shown in npm run build-dev.
We use Next.js' lint rules, which are extremely useful for finding React hook issues:
cd web
npm run lint
To lint just one file run:
cd web
npx eslint front/Article.tsx
Each user has an admin property which, when set to true, allows the user to basically view and change anything, both for themselves and for other users. E.g. admins can see private data of any user, such as emails, or modify users' usernames.
Some actions are currently not possible because they were originally hardcoded as "do action for the current user" rather than "do action for target user", but all of those are intended to be converted. E.g. that is currently the case for like/unlike and follow/unfollow from the API.
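The kind of permission check implied above can be sketched as follows. The function and object names are illustrative assumptions, not the actual OurBigBook Web code:

```javascript
// Illustrative permission check: admins may act on any user,
// regular users only on themselves.
function canModifyUser(actor, target) {
  return actor.admin === true || actor.username === target.username;
}

const admin = { username: 'root', admin: true };
const alice = { username: 'alice', admin: false };
const bob = { username: 'bob', admin: false };

console.log(canModifyUser(admin, alice)); // true
console.log(canModifyUser(alice, bob));   // false
console.log(canModifyUser(bob, bob));     // true
```

Endpoints hardcoded as "do action for the current user" effectively skip the target parameter of such a check, which is why they cannot yet be used by admins on behalf of other users.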
In order to mark a user as admin, direct DB access is required.
For example, to make user barack-obama an admin on a development run, use the web/bin/make-admin script:
web/bin/make-admin barack-obama
Admin privileges can be revoked with the -f (--false) flag:
web/bin/make-admin -f barack-obama
The same command works in a Heroku deployment where you can run:
heroku run -a ourbigbook web/bin/make-admin -f barack-obama
We currently have some intentional denormalization in our database e.g.:
- counts such as: user reputation, article issue and follower counts, issue comment and follower counts
- nested sets
These denormalizations are not ideal, but they make things a bit easier, and some of them are almost certainly faster.
To keep things slightly saner, the web/bin/normalize script can be used to view, check and update denormalized data.
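The kind of check-and-update pass that a normalize script performs can be sketched as follows. This is an illustrative toy with made-up in-memory tables, not the real web/bin/normalize implementation:

```javascript
// Illustrative denormalization checker: recompute a stored count from the
// source-of-truth rows, report mismatches, and optionally fix them in place.
const articles = [{ id: 1, likeCount: 2 }, { id: 2, likeCount: 5 }];
const likes = [{ articleId: 1 }, { articleId: 1 }, { articleId: 2 }];

function normalizeLikeCounts(fix) {
  const actual = new Map();
  for (const like of likes) {
    actual.set(like.articleId, (actual.get(like.articleId) || 0) + 1);
  }
  const mismatches = [];
  for (const article of articles) {
    const real = actual.get(article.id) || 0;
    if (article.likeCount !== real) {
      mismatches.push({ id: article.id, stored: article.likeCount, real });
      if (fix) article.likeCount = real; // "update" mode
    }
  }
  return mismatches;
}

console.log(normalizeLikeCounts(false)); // [ { id: 2, stored: 5, real: 1 } ]
normalizeLikeCounts(true);               // fix the drifted count
console.log(normalizeLikeCounts(false)); // []
```

Running the check twice with fix enabled in between is a cheap way to verify that the repair converged.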
The "nested set index" is an index explicitly maintained by our codebase that allows quickly fetching pages for the OurBigBook Web dynamic article tree in pre-order depth-first order, i.e. the conventional order in which the table of contents and articles appear in a book. See also: stackoverflow.com/questions/4048151/what-are-the-options-for-storing-hierarchical-data-in-a-relational-database
This technique is closely related to, though distinct from, what some authors call a "closure table".
This index is, as the name indicates, an index, i.e. it duplicates information otherwise present in the OurBigBook Web Ref database table, which instead contains an adjacency list format, in the hope that it will be faster to pre-order depth-first traverse.
This feature adds considerable complexity to the codebase. Also, updates can be considerably slow, as updating this index for a single article requires updating the index value of most or all other articles as well. We should benchmark it better vs recursive queries.
This index was partly introduced as a helper rather than as a pure speed up, as it is a bit hard to do pre order tree traversal in SQLite due to the lack of arrays. In PostgreSQL we can do it well: stackoverflow.com/questions/65247873/preorder-tree-traversal-using-recursive-ctes-in-sql/77276675#77276675
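As a sketch of the general technique (not the actual OurBigBook Web schema), nested set values can be computed from an adjacency list with one depth-first walk, after which fetching a subtree in book order reduces to a single sorted range query over the left values:

```javascript
// Generic nested-set sketch: assign (left, right) bounds to each node via a
// depth-first traversal of an adjacency list (parent -> children).
function computeNestedSet(children, root) {
  const bounds = new Map();
  let counter = 0;
  function walk(node) {
    const left = counter++;
    for (const child of children.get(node) || []) walk(child);
    bounds.set(node, { left, right: counter++ });
  }
  walk(root);
  return bounds;
}

// Toy tree: index -> [mathematics, physics], mathematics -> [calculus]
const children = new Map([
  ['index', ['mathematics', 'physics']],
  ['mathematics', ['calculus']],
]);
const bounds = computeNestedSet(children, 'index');

// Pre-order depth-first = ascending left value; a subtree is all nodes with
// root.left <= left < root.right, which SQL can fetch with one indexed query.
const preorder = [...bounds.entries()]
  .sort((a, b) => a[1].left - b[1].left)
  .map(([id]) => id);
console.log(preorder); // [ 'index', 'mathematics', 'calculus', 'physics' ]
```

The sketch also shows why updates are expensive: inserting a node shifts the left/right values of every node that comes after it, so the whole index may need rewriting for a single article change.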
Any pending migrations are done automatically during deployment as part of
npm run build
, more precisely they are run from web/bin/sync-db.js. We also have a custom setup where, if the database is not initialized, we instead:
- just create the database from the latest model descriptions
- manually fill in the
SequelizeMeta
migration tracking table with all available migrations to tell Sequelize that all migrations have been done up to this point
This is something that should be merged into Sequelize itself, or at least asked on Stack Overflow, but we haven't gotten around to it yet.
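A rough sketch of that bootstrap step (hypothetical helper, not the actual sync-db.js code): after creating the tables from the models, every migration filename is recorded so Sequelize considers it already applied.

```javascript
// Sketch: build the rows that would be bulk-inserted into SequelizeMeta
// to mark all known migrations as applied. SequelizeMeta has a single
// "name" column holding the migration filename.
function sequelizeMetaRows(migrationFilenames) {
  return migrationFilenames
    .filter(f => f.endsWith('.js'))
    .sort()
    .map(name => ({ name }))
}

const rows = sequelizeMetaRows([
  '20220321000000-output-update-ancestor.js',
  '20230101000000-add-some-column.js',
  'README.md', // non-migration files are ignored
])
console.log(rows)
```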
In order to test migrations locally interactively, you can:
- commit them on Git
- go back to the previous commit:
git checkout HEAD~
- reset the database with demo data:
cd web
./bin/generate-demo-data.js --clear
- Move back to master:
git checkout -
- Run the migration:
./bin/sync-db.js
Since Sequelize migrations are so hard to get right, it is fundamental to test them.
One way to do it is with our web/bin/test-migration script:
cd web
bin/test-migration -u1 -a3
For PostgreSQL:
bin/pg bin/test-migration -u1 -a3
Note that Sequelize SQLite migrations are basically worthless and often incorrectly fail due to foreign key constraints: stackoverflow.com/questions/62667269/sequelize-js-how-do-we-change-column-type-in-migration/70486686#70486686 so you might not care much about making them pass and focus only on PostgreSQL.
The
test-migration
script:
- does a git checkout to the previous commit
- regenerates the database
- checks out to master
- and then does the migration
The arguments of
test-migration
are forwarded to web/bin/generate-demo-data.js to generate the demo data; -u1 -a5
would produce a small amount of data, suitable for quick iteration tests. Towards the end of that script, we can see lines of type:
+ diff -u tmp.old.sqlite3.sort.sql tmp.new-clean.sqlite3.sort.sql
+ diff -u tmp.new-clean.sqlite3.sort.sql tmp.new-migration.sqlite3.sort.sql
Those are important diffs you might want to look at every time:
- tmp.old.sqlite3.sort.sql: old schema before migration, but with lines sorted alphabetically
- tmp.new-clean.sqlite3.sort.sql: new schema achieved by dropping the database and re-creating it at once
- tmp.new-migration.sqlite3.sort.sql: new schema achieved by migrating from the old state
Therefore, you really want the
diff tmp.new-clean.sqlite3.schema tmp.new-migration.sqlite3.schema
to be empty. For sqlite3 we actually check that and give an error if they differ, but for PostgreSQL it is a bit harder due to the multiline statements, so just inspect the diffs manually. When quickly developing before we had any users, a reasonable approach was to nuke the database every time instead of spending time writing migrations. To do this, without creating a migration, you can:
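The sorted-diff idea can be sketched as follows in plain JavaScript (illustrative only; the real script shells out to sort and diff): sorting schema lines first makes the comparison insensitive to the statement ordering differences between a freshly created and a migrated database.

```javascript
// Compare two schema dumps ignoring statement order:
// sort the non-empty lines, then report lines present in one but not the other.
function schemaDiff(a, b) {
  const sortLines = s => s.split('\n').filter(l => l.trim()).sort()
  const [la, lb] = [sortLines(a), sortLines(b)]
  return {
    onlyInA: la.filter(l => !lb.includes(l)),
    onlyInB: lb.filter(l => !la.includes(l)),
  }
}

// A migration that forgot to add the "age" column would show up like this:
const clean = 'CREATE TABLE "User" (id INTEGER);\nCREATE TABLE "Article" (id INTEGER);'
const migrated = 'CREATE TABLE "Article" (id INTEGER);\nCREATE TABLE "User" (id INTEGER, age INTEGER);'
console.log(schemaDiff(clean, migrated))
```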
npm run deploy-prod
This breaks the website, because the DB is out of sync. So then you go and manually fix it up:
# heroku run -a ourbigbook web/bin/generate-demo-data.js --force-production --clear
Some hacks for those who have DB access.
Change the dates of all articles by a given user to a specific date. First inspect the current values:
select "Article"."updatedAt" from "Article" inner join "File" on "Article"."fileId" = "File".id inner join "User" on "File"."authorId" = "User"."id" and "User".username = 'barack-obama';
OurBigBook is currently hardcoded to send emails with Sendgrid. That provider was very easy to get started with, and has a free plan suitable for testing. Setup is described at: OurBigBook Web email sending with Sendgrid. Patches supporting other providers in a configurable way are welcome.
In development mode, emails are all logged to the server stdout and not actually sent, unless you run as:
OURBIGBOOK_SEND_EMAIL=1 npm run dev
This can be used to test the email integration locally.
Some research of different methods is shown at: cirosantilli.com/send-free-emails-from-heroku
Related configurations:
- after reports that users had received a "Suspicious link" warning when clicking the signup link on Gmail, we've tried to add a TXT record of:
google-site-verification=gctCPztssfR8A-fQ_5298gSee_DfFjBj8v9PqAxuhgU
to the domain, which Google marked as verified. Let's see if it helps.
- ensure that you have a working email address in the hosted domain, such as notification@ourbigbook.com, e.g. on our custom domain name setup with Porkbun. We achieved this by initially redirecting notification@ourbigbook.com to your personal email.
- create a Sendgrid account
- it would also be a good idea to setup two-factor authentication
- verify your domain, e.g. ourbigbook.com. This means setting up three CNAME records given by Sendgrid on your DNS provider, e.g. Porkbun.
- create a single sender. We used:
- From Name: OurBigBook.com
- From Email Address: notification@ourbigbook.com
- Reply to: notification@ourbigbook.com
- we disabled their "link tracking" feature, which was turned on by default. While it is fun to track clicks, it is basically useless for transactional email, and it parses the HTML and replaces the links with their tracking links, making things less clear for end users. It is also harder to debug.
- integrate using the web API
- create an API key, and then save it on Heroku:
heroku config:set -a ourbigbook SENDGRID_API_KEY=thekey
Also set it locally to be able to test the email sending integration locally:
echo SENDGRID_API_KEY=thekey >> web/.env
Then, to verify that email sending is actually working, run:
OURBIGBOOK_SEND_EMAIL=1 npm run dev
and try to register some of your real emails. You should actually receive the email at this step. The email appears as sent from:
ciro@ourbigbook.com via sendgrid.net
Gmail accepted the email under Promotions without domain verification, but Outlook sent it to spam. Make sure to click "it is not spam" in that case.
Go to www.google.com/recaptcha/about/, setup a new domain, and save the values given e.g. to Heroku for Heroku deployment:
heroku config:set -a ourbigbook RECAPTCHA_SECRET_KEY=secret_key
heroku config:set -a ourbigbook NEXT_PUBLIC_RECAPTCHA_SITE_KEY=site_key
Additionally, also setup a separate localhost reCAPTCHA to test that it is working:
echo RECAPTCHA_SECRET_KEY=secret_localhost_key >> web/.env
echo NEXT_PUBLIC_RECAPTCHA_SITE_KEY=site_localhost_key >> web/.env
and then, to use the .env file, run with:
cd web
env $(cat .env | xargs) npm run dev
Although it is possible to use a single reCAPTCHA for both production and development, Google recommends having separate ones.
If the
NEXT_PUBLIC_RECAPTCHA_SITE_KEY
variable is not set, then reCAPTCHA is simply not used in the website.
We got it running perfectly at ourbigbook.com as of April 2021 with the following steps.
Initial setup for a Heroku project called
ourbigbook
:
sudo snap install --classic heroku
heroku login
heroku git:remote -a ourbigbook
git remote rename heroku prod
# Automatically sets DATABASE_URL.
heroku addons:create -a ourbigbook heroku-postgresql:hobby-dev
# We need this to be able to require("ourbigbook")
heroku config:set -a ourbigbook SECRET="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 256)"
# Password of users generated with ./web/bin/generate-demo-data
heroku config:set -a ourbigbook OURBIGBOOK_DEMO_USER_PASSWORD="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 20)"
# You can get it later to login with the demo users from the Heroku web interface
To finish things off, you must now:
- setup OurBigBook Web email sending, e.g. with OurBigBook Web email sending with Sendgrid. We haven't found a free method integrated with Heroku currently, so we are using this separate Sendgrid setup initially.
- setup reCAPTCHA: OurBigBook Web reCAPTCHA setup
Additionally, you also need to setup the PostgreSQL test database for both OurBigBook CLI and OurBigBook Web as documented at Section 7.2.5.1. "OurBigBook Web PostgreSQL setup":
web/bin/pg-setup ourbigbook-cli
Then deploy with:
cd web
npm run deploy-prod
Get an interactive shell on the production server:
./heroku run bash
From there you could then for example update the demo data with:
cd web
bin/generate-demo-data.js --force-production
This should in theory not affect any real user data, only the demo articles and users, so it might be safe. In theory!
Alternatively, we could do this all at once with:
./heroku run web/bin/generate-demo-data.js --force-production
Drop into a PostgreSQL shell on production:
./heroku psql
Of course, any writes could mean loss of user data!
Run a query directly from your terminal:
./heroku psql -c 'SELECT username,email FROM "User" ORDER BY "createdAt" DESC LIMIT 50'
If some spurious bug crashes the server, you might want to restart it with:
./heroku restart
The heroku helper allows us to omit the boring
-a ourbigbook
, e.g. we can just type:
./heroku logs -f
instead of:
heroku logs -a ourbigbook -f
heroku
#!/usr/bin/env bash
# https://docs.ourbigbook.com/file/heroku
set -eu
cmd="$1"
shift
heroku "$cmd" -a ourbigbook "$@"
The 10k rows of the free plan are easy to reach; this procedure can be used to upgrade:
The domain OurBigBook.com was leased from: porkbun.com/. Unfortunately, HTTPS on Heroku with a custom domain requires using a paying tier, so we upgraded from the free tier to the cheapest paid tier, "Hobby Project", to start with: stackoverflow.com/questions/52185560/heroku-set-ssl-certificates-on-free-plan
On the Porkbun web UI, we added a DNS record of type:
ALIAS ourbigbook.com <heroku-id>.herokudns.com
where
heroku-id
was obtained from:
heroku domains:add ourbigbook.com
heroku domains
and we removed all other
ALIAS
/CNAME
records from Porkbun. Next, we setup forwarding from
ciro@ourbigbook.com
to Ciro Santilli's personal gmail account. This is done in part because it appears that we are required to provide a from address for OurBigBook Web email sending with Sendgrid, and that email has to be verified. Having Porkbun host it costs $2/month, and we are trying to use as much free stuff as possible until there are actual users on the website. Note that if you try to test from your own personal account, the redirect automatically skips sending as it notices that it would redirect to the sender. To test it you have to use some secondary email account instead.
Before pushing any new changes, and especially ones that seem dangerous, it is a good idea to first deploy to a staging server.
We have a staging server running at: ourbigbook-staging.herokuapp.com/
To set it up, we just follow the exact same steps as for Heroku deployment but with a different app ID. E.g. using the
ourbigbook-staging
heroku project ID:
git remote add staging https://git.heroku.com/ourbigbook-staging.git
heroku addons:create -a ourbigbook-staging --confirm ourbigbook-staging heroku-postgresql:hobby-dev
heroku config:set -a ourbigbook-staging SECRET="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 256)"
npm run deploy-staging
To copy the main database in staging we can follow the instructions at: stackoverflow.com/questions/10673630/how-do-i-transfer-production-database-to-staging-on-heroku-using-pgbackups-gett Considering a production Heroku app ID of
ourbigbook
:
heroku maintenance:on -a ourbigbook-staging &&
heroku pg:copy ourbigbook::DATABASE_URL DATABASE_URL -a ourbigbook-staging &&
heroku maintenance:off -a ourbigbook-staging
To get a shell on the staging server you can run:
heroku run -a ourbigbook-staging bash
To log database queries you can run:
./heroku config:set DEBUG='*:sql:*'
You can then see them with the other logs at:
./heroku logs -t
Disable these verbose logs once you're done:
./heroku config:unset DEBUG
First download a dump of the database as per devcenter.heroku.com/articles/heroku-postgres-import-export with web/bin/pg_dump_heroku:
web/bin/pg_dump_heroku
This produces a file
latest.dump
with the database dump. If that file already exists, it gets overwritten. Restoring that database locally to reproduce bugs can be done with the helper web/bin/pg_restore_heroku_local:
web/bin/pg_restore_heroku_local
That helper restores the local
latest.dump
database file like in Section 7.2.5.5.2. "Save and restore local PostgreSQL development database". First we nuke the database completely with web/bin/pg-setup to increase accuracy:
web/bin/pg-setup
web/bin/pg_restore --no-acl --no-owner latest.dump
We also add some extra flags to reduce the amount of warnings and errors due to database differences. The command does not exit with status 0. devcenter.heroku.com/articles/heroku-postgres-import-export says some of those warnings are normal and can be ignored.
Restoring that local database dump to Heroku, e.g. when reverting back after issues, can be done with the helper web/bin/pg_restore_heroku_remote:
web/bin/pg_restore_heroku_remote
This will then ask you to type an interactive confirmation, which we have not disabled by default.
On the toplevel we have:
.
: OurBigBook package
- Every require outside of
web/
must be relative, except for executables such as ourbigbook or demos such as lib_hello.js, or else the deployment will break. This is because we don't know of a super clean way of adding the toplevel
ourbigbook
package to the search path, as
npm run link
does not work well on Heroku. A known workaround to allow
npm run build-assets
is done at: web/build.sh.
Currently, Heroku deployment does the following:
- install both
dependencies
anddevDependencies
npm run build
- remove
devDependencies
from the final output to save space and speed some things up
The
devDependencies
should therefore only contain things which are needed for the build, typically asset compressors like Webpack, but not components that are required at runtime.
This setup creates some conflict between what we want for OurBigBook command line users, and Heroku deployment.
Notably, OurBigBook command line users will want SQLite, and Heroku never, and SQLite installation is quite slow.
Since we were unable to find any way to make things more flexible on the
package.json
with some kind of optional dependency, for now we are just hacking out any dependencies that we don't want Heroku to install at all from package.json and web/package.json with sed from heroku-prebuild. Further discussion at: github.com/ourbigbook/ourbigbook/issues/156
stackoverflow.com/questions/18215389/how-do-i-measure-request-and-response-times-at-once-using-curl is useful if the server is slow:
curl -o /dev/null -s -w 'Establish Connection: %{time_connect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n' http://localhost:3000
These are documented at nextjs.org/docs/advanced-features/measuring-performance. To enable them use
NEXT_PUBLIC_OURBIGBOOK_LOG_PERF
, the front-end-available version of OURBIGBOOK_LOG_PERF
:
NEXT_PUBLIC_OURBIGBOOK_LOG_PERF=1 npm run dev
If enabled, we see a few metrics printed on the browser console such as:
{
"id": "1667591558646-5510113745482",
"name": "TTFB",
"startTime": 0,
"value": 2474.199999999255,
"label": "web-vital"
}
{
"id": "1667591558646-9119282792899",
"name": "LCP",
"startTime": 4982.099,
"value": 4982.099,
"label": "web-vital"
}
If set to true, this OurBigBook environment variable enables somewhat verbose logs of several key performance points, notably in:
- web vitals on all pages: nextjs.org/docs/pages/building-your-application/optimizing/analytics
- conversion
A good benchmark for the critical Article page is perhaps the Wikipedia bot account, which stresses the article tree the hardest.
We should look out for metrics such as:
- First Contentful Paint (FCP)
- Time to First Byte (TTFB)
and test both on Heroku deployment and locally.
Local tests are always on optimized PostgreSQL. Remote tests are always on powerful home wifi, never 4G. Measurements are taken in the browser Network tab in developer tools with cache enabled. Each URL is run randomly a few times, which gives an idea of cache warmup effects. Logged-in means logged in as
cirosantilli
.
- 075872a0a5ca7faf171d45834bc2b47995a15634 web: speed up article page DB queries further by moving topicId into topic. At this commit we had highly optimized article page queries. The slowest query was getting the new upvotes of the logged-in user at 20 - 30 ms.
- TTFB
- local
- logged off:
- /wikibot: 65 - 70
- /cirosantilli: 160 - 190
- /cirosantilli/mathematics: 100 - 120
- /barack-obama: 40 - 60
- logged in:
- /wikibot: 90 - 100
- /cirosantilli: 200 - 300
- /barack-obama 65
- logged off:
- cirosantilli.com: 100 - 200
- ourbigbook.com
- logged off:
- /wikibot: 180 - 300
- /cirosantilli: 300 - 400
- /cirosantilli/mathematics: 240 - 400
- /barack-obama: 140 -180
- logged in:
- /wikibot: 330 - 450
- /cirosantilli: 450 - 600
- /cirosantilli/mathematics: 350 - 500
- /barack-obama: 250 - 400
- logged off:
- local
- TTFB
- 8ea5ffa52d291350e9a5ddef92e4171d50a51dcc TTFB logged off. This suggests a database scaling issue like a missing index:
- local
- /wikibot: 1200
- /cirosantilli: 500
- /barack-obama: 400
- cirosantilli.com: 100 - 200
- ourbigbook.com
- /wikibot: 3500 - 4500
- /cirosantilli: 1100 - 1500
- /barack-obama: 500 - 700
- local
For a general introduction to CSRF see: security.stackexchange.com/questions/8264/why-is-the-same-origin-policy-so-important/72569#72569
CSRF security is organized as follows:
- unsafe methods such as POST are all authenticated by JWT. This authentication comes from headers that can only be sent via JavaScript, so it is not possible to make users click links that will take those actions
- safe methods such as GET are authenticated by a cookie. The cookie has the same value as the JWT. It is possible for third party websites to make such authenticated requests, but it doesn't matter as they will not alter the server state, and the contents cannot be read back due to the same-origin policy. There is currently one exception to this: the verification page, which has side effects based on GET. But it shouldn't matter in that specific case.
The JWT token is only given to users after account verification. Having the JWT token is the definition of being logged in.
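A minimal sketch of the scheme above in plain JavaScript (illustrative; the header format and field names here are assumptions, not the actual middleware): unsafe methods require the token in a header that a third-party page cannot set on a cross-origin form submission, while safe methods may rely on the cookie alone.

```javascript
// Sketch of the CSRF scheme described above (not the real code):
// unsafe methods must carry the JWT in a header that only same-origin
// JavaScript can set; safe methods may authenticate via cookie alone.
const SAFE_METHODS = new Set(['GET', 'HEAD', 'OPTIONS'])

function isRequestAllowed(req, validJwt) {
  if (SAFE_METHODS.has(req.method)) {
    // Safe methods: cookie is enough, since they don't mutate state.
    return req.cookies.auth === validJwt
  }
  // Unsafe methods: require the header token, which a forged
  // cross-site request cannot provide.
  return req.headers.authorization === `Token ${validJwt}`
}

const jwt = 'abc123'
console.log(isRequestAllowed({ method: 'GET', cookies: { auth: 'abc123' }, headers: {} }, jwt))  // true
console.log(isRequestAllowed({ method: 'POST', cookies: { auth: 'abc123' }, headers: {} }, jwt)) // false: cookie alone is not enough
console.log(isRequestAllowed({ method: 'POST', cookies: {}, headers: { authorization: 'Token abc123' } }, jwt)) // true
```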
- Frontend
- Next.js
- React
- Frontend backend communication
- JSON API
- Next.js makes its prerender server-side queries directly to the database without going through the API
- Backend
- Express.js
- Sequelize
- SQLite for local development, PostgreSQL for deployment
- nav tabs
- are icon-separated, e.g.: "(home icon) Home (article icon) Top Articles (article icon) Latest articles"
- every title-like (e.g. pages, table headers) thing and links to title-like things are "Sentence cased", i.e.:
- the first letter uppercase
- others are lowercase or uppercase if proper nouns
- things that users can click to "take actions" (usually modify the database) show as buttons. Things that users can click to view things show as links. Examples of actions:
- like, subscribe
- create article/issue/comment
- go to a separate new/edit article/issue page. This is strictly technically speaking just a link, but it is closely related to creating something new, so it feels more intuitive for it to be a button
- when logged off:
- stateful actions like "create article" or "like article" show as if logged in, but redirect to the signup page. This is unless it would be possible for the user to create significant content and then lose it, e.g. typing in a new comment body and only noticing later that it cannot be submitted.
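The sentence-casing convention above could be sketched like this (hypothetical helper with a made-up proper noun list, not an actual codebase function):

```javascript
// Sketch of the "Sentence case" title convention described above:
// first letter uppercase, the rest lowercase except known proper nouns.
// The properNouns set is a hypothetical example, not the real list.
function sentenceCase(title, properNouns = new Set(['OurBigBook', 'PostgreSQL'])) {
  const byLower = new Map([...properNouns].map(w => [w.toLowerCase(), w]))
  const words = title.split(' ').map(w => byLower.get(w.toLowerCase()) || w.toLowerCase())
  // Capitalize the first word unless it is a proper noun with its own casing.
  words[0] = byLower.get(words[0].toLowerCase()) || words[0][0].toUpperCase() + words[0].slice(1)
  return words.join(' ')
}

console.log(sentenceCase('Top Articles'))           // Top articles
console.log(sentenceCase('postgresql setup guide')) // PostgreSQL setup guide
```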
It is intended that OurBigBook Web be readable with JavaScript disabled. This has the following advantages:
- reduces flickering on page load for users that have JavaScript enabled
- may help with SEO
- helps with Web archiving. The Wayback Machine for example is notably bad with JavaScript
- helps privacy freaks who have their JavaScript turned off
Pages should look exactly the same with JavaScript turned on or off.
Page interactive behaviour may differ slightly. Notably, due to OurBigBook Web dynamic article tree, clicking links with JavaScript off always opens a new page
/username/myid
rather than going to #myid
if the target Element ID is already visible in the current page. User input and even login are not intended to be necessarily possible, however, and will likely always be broken.
This section describes rules for normally browser-visible URLs of the website. These rules do not apply to the Web API, see OurBigBook Web API standards for Web API URL standards.
It should be impossible to have upper case characters in any URL of the website. Words should be separated by hyphens
-
instead. Use the usual grammatical ordering for action-object pairs, e.g.:
new-discussion
edit-discussion
instead of:
discussion-new
discussion-edit
The latter is tempting as it would group all "Discussion" actions under a prefix, but let's use the nice grammar instead.
GET parameters should always be alphabetically ordered by key, e.g.:
?ab=1&cd=2
rather than:
?cd=2&ab=1
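This normalization can be sketched with a small helper (illustrative only, using the standard URLSearchParams API):

```javascript
// Sketch: normalize a query string so its keys are alphabetically ordered,
// matching the URL convention described above.
function sortQueryParams(query) {
  const params = [...new URLSearchParams(query).entries()]
  params.sort(([a], [b]) => a < b ? -1 : a > b ? 1 : 0)
  return '?' + params.map(([k, v]) => `${k}=${encodeURIComponent(v)}`).join('&')
}

console.log(sortQueryParams('?cd=2&ab=1')) // ?ab=1&cd=2
```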
Next.js imposes one constraint: ISR only works with URL parameters like
/articles/<page>
, not GET parameters like /articles?page=1
. As of writing, however, we don't use any ISR as it adds a lot of complication. But still, we are trying to stick to the general principle that if something might ever be ISR'ed in the future, then we would like to keep it as a URL parameter rather than a GET parameter. It feels sane.
The only things that we would ever consider ISR'ing are the pre-rendered versions of articles and issues, excluding any metadata that changes often or depends on logged-in users.
Lists of things will never be ISR'ed, as those can change constantly. One conclusion of this is that the following, which appear only in lists of things, will always be part of the GET query, and not params:
- page number
- ordering
- other search-like parameters
Types:
- booleans are
true
orfalse
It is a bit annoying that, due to scopes being separated with
/
, we always have to put article names last in any URL (outside GET parameters) to avoid ambiguities. E.g. it would be arguably nicer to have:
/go/donald-trump/linear-algebra/issues
rather than the current:
/go/issues/donald-trump/linear-algebra
but this produces ambiguity: what if user
issues
has an article with title Linear algebra under scope donald-trump?
In the API, article slugs are always passed as a GET parameter, unlike in the case of browser-visible URLs. This is because there we don't care about having:
- nice human readable URLs
- ISR, at least for now
so the
id=
parameter is always used. For now there is no API that returns single items: getting a single item is done simply by using a filter that uniquely selects a single element, e.g.:
/api/articles?id=johnsmith/mathematics
Maybe this will change if someday we decide to have full vs minimized versions of API objects. But at that point we might as well go to GraphQL.
- web/app.js: server executable entry point, can also be used programmatically from Node.js
- web/models: Sequelize database models. Most changes in this folder require the creation of a corresponding web/migrations
- web/migrations: database migrations, see also: Section 7.2.11.2.4. "OurBigBook Web database migration setup"
- web/pages: Next.js URL entry points
- web/front and web/front.tsx: files that can be imported from either front-end or backend. See: stackoverflow.com/questions/64926174/module-not-found-cant-resolve-fs-in-next-js-application/70363153#70363153
- web/back and web/back.ts: files that can be imported only from backend. See: stackoverflow.com/questions/64926174/module-not-found-cant-resolve-fs-in-next-js-application/70363153#70363153
- web_api.js: helpers to access the OurBigBook HTTP REST API. These have to be outside of
web/
because OurBigBook CLI uses them e.g. for syncing local files to the server, and OurBigBook CLI cannot depend on OurBigBook Web components, only the other way around, otherwise we could create circular dependencies. That exact same JavaScript code is also used from the front-end! The infinite joys of isomorphic JS.
This section is about OurBigBook Web administration utilities that live under the directory web/bin.
These can mostly be used to conveniently manipulate the database to perform some routine administrative tasks from the command line.
Check the counts of issues per article for user
barack-obama
only with -c
, but don't fix anything:
web/bin/normalize -c -u barack-obama article-issue-count
Print the full correct normalized state with
-p
:
web/bin/normalize -p -u barack-obama issue-follower-count
Fix the counts of issue follower if any are wrong with
-f
, thus potentially altering the database:
web/bin/normalize -f -u barack-obama issue-follower-count
web/bin/normalize
#!/usr/bin/env node
// https://docs.ourbigbook.com/file/web/bin/normalize
const path = require('path')
const commander = require('commander')
const models = require('../models')
// main
const program = commander.program
program.description('View, check or update (i.e. normalize) redundant database data: https://docs.ourbigbook.com/ourbigbook-web-dynamic-article-tree https://docs.ourbigbook.com/_file/web/bin/normalize')
program.option('-c, --check', 'check if something is up-to-date', false);
program.option('-f, --fix', 'fix before printing', false);
program.option('-p, --print', 'print the final state after any update if any', false);
program.option(
'-u, --username <username>',
'which user to check or fix for. If not given do it for all users. Can be given multiple times.',
(value, previous) => previous.concat([value]),
[],
);
program.parse(process.argv);
const opts = program.opts()
const whats = program.args
const sequelize = models.getSequelize(path.dirname(__dirname))
;(async () => {
await models.normalize({
check: opts.check,
fix: opts.fix,
log: true,
print: opts.print,
sequelize,
usernames: opts.username,
whats,
})
})().finally(() => { return sequelize.close() });
Rerender all articles by all users:
web/bin/rerender-articles.js
Rerender only the articles with specified slugs:
web/bin/rerender-articles.js johnsmith/mathematics maryjane/physics
Only rerender articles by
johnsmith
and maryjane
:
web/bin/rerender-articles.js -a johnsmith -a maryjane
Rerender articles by all authors except
johnsmith
and maryjane
:
web/bin/rerender-articles.js -A johnsmith -A maryjane
Rerendering has to be done to see updates on OurBigBook changes that change the render output.
Notably, this would be mandatory in the case of CSS changes that require corresponding HTML changes.
As the website grows, we will likely need a lazy version of this that marks pages as outdated and then renders them on the fly, plus a background thread that always updates outdated pages.
The functionality of this script should be called from a migration whenever such HTML changes are required. TODO link to an example. We had one at
web/migrations/20220321000000-output-update-ancestor.js
that seemed to work, but we lost it. It was simple though: you just have to instantiate your own Sequelize instance after making the model change to move any data.
web/bin/rerender-articles.js
#!/usr/bin/env node
const path = require('path')
const commander = require('commander');
const models = require('../models')
const back_js = require('../back/js')
const program = commander.program
program.description('Re-render articles https://docs.ourbigbook.com/_file/web/bin/rerender-articles.js')
program.option('-a, --author <username>', 'only convert articles by this author', (v, p) => p.concat([v]), [])
program.option('-A, --skip-author <username>', "don't convert articles by this author", (v, p) => p.concat([v]), [])
program.option('-i, --ignore-errors', 'ignore errors', false);
program.parse(process.argv);
const opts = program.opts()
const slugs = program.args
const sequelize = models.getSequelize(path.dirname(__dirname));
(async () => {
await sequelize.models.Article.rerender({
log: true,
convertOptionsExtra: { katex_macros: back_js.preloadKatex() },
authors: opts.author,
ignoreErrors: opts.ignoreErrors,
slugs,
skipAuthors: opts.skipAuthor,
})
})().finally(() => { return sequelize.close() });
Analogous to web/bin/rerender-articles.js but for issues.
web/bin/rerender-issues.js
#!/usr/bin/env node
const path = require('path')
const commander = require('commander');
const models = require('../models')
const back_js = require('../back/js')
const program = commander.program
program.description('Re-render issues https://docs.ourbigbook.com/_file/web/bin/rerender-issues.js')
program.option('-i, --ignore-errors', 'ignore errors', false);
program.parse(process.argv);
const opts = program.opts()
const sequelize = models.getSequelize(path.dirname(__dirname));
(async () => {
await sequelize.models.Issue.rerender({
log: true,
convertOptionsExtra: { katex_macros: back_js.preloadKatex() },
ignoreErrors: opts.ignoreErrors
})
})().finally(() => { return sequelize.close() });
Analogous to web/bin/rerender-articles.js but for comments.
web/bin/rerender-comments.js
#!/usr/bin/env node
const path = require('path')
const commander = require('commander');
const models = require('../models')
const back_js = require('../back/js')
const program = commander.program
program.description('Re-render comments https://docs.ourbigbook.com/_file/web/bin/rerender-comments.js')
program.option('-i, --ignore-errors', 'ignore errors', false);
program.parse(process.argv);
const opts = program.opts()
const sequelize = models.getSequelize(path.dirname(__dirname));
(async () => {
await sequelize.models.Comment.rerender({
log: true,
convertOptionsExtra: { katex_macros: back_js.preloadKatex() },
ignoreErrors: opts.ignoreErrors
})
})().finally(() => { return sequelize.close() });
Change password for a given user. Usage:
set-password <username> <new-password>
Note that this can also be achieved on the web interface by visiting the settings page of a target user with an OurBigBook Admin account.
web/bin/set-password
#!/usr/bin/env node
// https://docs.ourbigbook.com/web/bin/set-password
const path = require('path')
const commander = require('commander')
const models = require('../models')
// CLI arguments
const program = commander.program
program.allowExcessArguments(false)
program.argument('<username>', 'username')
program.argument('<password>', 'password')
program.parse(process.argv);
const opts = program.opts()
const [username, password] = program.processedArgs
// main
const sequelize = models.getSequelize(path.dirname(__dirname));
(async () => {
const user = await sequelize.models.User.findOne({ where: { username }})
await sequelize.models.User.setPassword(user, password)
await user.save()
})().finally(() => { return sequelize.close() });
This section describes features present in ourbigbook_runtime.js.
That file contains JavaScript functionality to be included in the final documents to enable interactive document features such as the table of contents.
You should however use the packaged
_obb/ourbigbook_runtime.js
instead of this file directly. When you have a document like:
animal.bigb
= Animal
== Dog
=== Poodle
the version without
-S
, --split-headers
will contain a valid ID within it: animal.html#poodle
However, if at some point you decide that the section
dog
has become too large and want to split it as:
= Animal
\Include[dog]
and:
dog.bigb
= Dog
== Poodle
When you do this, it would break links that users might have shared to
animal.html#poodle
, since the content is now located at dog.html#poodle
. To make that less bad, if
-S
, --split-headers
is enabled, we check at runtime if the ID poodle
is present in the output, and if it is not, we redirect #poodle
to the split page poodle.html
. It would be even more awesome if we were able to redirect to the non-split version as well,
dog.html#poodle
, but that would be harder to implement, so we are not doing it for now.
Unlike all languages which rely on ad-hoc tooling, we will support every tool that is required and feasible to have in this repository, in this repository, in a centralized manner.
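The runtime check described above can be sketched as a pure function (illustrative; the actual runtime works on the DOM and browser location rather than plain values):

```javascript
// Sketch of the split-header fallback described above: given a URL fragment
// and the set of element IDs present on the current page, decide whether to
// stay on the page or redirect to the split page <id>.html.
function fragmentRedirectTarget(fragmentId, idsOnPage) {
  if (idsOnPage.has(fragmentId)) {
    return null // ID exists on this page: no redirect needed
  }
  return `${fragmentId}.html` // fall back to the split page for that header
}

const idsOnPage = new Set(['animal', 'dog'])
console.log(fragmentRedirectTarget('dog', idsOnPage))    // null
console.log(fragmentRedirectTarget('poodle', idsOnPage)) // poodle.html
```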
The only thing we have for now is the quick and dirty adoc-to-bigb.
The better approach would be to implement a converter in Haskell from anything to OurBigBook.
And to convert from OurBigBook to anything, create new output formats inside OurBigBook for those other formats.
VS Code is currently intended to be the best supported non-fully-custom OurBigBook editor.
The official OurBigBook extension is published at: marketplace.visualstudio.com/items?itemName=ourbigbook.ourbigbook-vscode by the publisher account: marketplace.visualstudio.com/publishers/ourbigbook.
First follow play with the template and ensure that you are able to run ourbigbook successfully from the command line on the template project:
npx ourbigbook .
We would like to remove the need for this step and allow users to do everything without the command line, but that will require some extra work: github.com/ourbigbook/ourbigbook/issues/318
Once that is working, you can now install the extension either:
- via the VS Code UI: Ctrl + Shift + X and search for "ourbigbook", the ID is:
ourbigbook.ourbigbook-vscode
- from the command line with:
ext install ourbigbook.ourbigbook-vscode
We also recommend installing the "Code Spell checker" extension:
ext install streetsidesoftware.code-spell-checker
and adding the following settings to your User JSON settings file:
"cSpell.enableFiletypes": [
  "ourbigbook"
],
Next, open the downloaded folder in Visual Studio Code with:
- Ctrl + Shift + P
- File: Open Folder
then open a .bigb file such as README.bigb in VS Code.
Now you are ready to:
- Ctrl + Shift + B: build all files in the folder
- F5: build all files in the folder, and view the HTML output for the current source file in your browser
Other things to try include:
- Ctrl + T: search for a header in any file
- type < to create an internal cross file reference and observe autocompletion suggest header names for you
Supported language features:
- syntax highlighting
- auto-completion:
  - snippets: e.g. if you type \I in an editor window, it suggests the autocompletion \Image[], and if you select it, it autofills the URL with the clipboard text.
    Common named arguments also have shortcuts starting with an open curly brace {, e.g. for the title argument it is {t (open curly brace + t), which expands to {title=}, leaving your cursor after the = sign.
    See their definitions in the snippets file vscode/snippets.json.
  - ID autocompletion, initiated by things like:
    - <: internal cross file reference
    - {tag=: \H tag argument
    - {parent=: \H parent argument
Known limitations: although separate words do find good matches in this case, unlike in Ctrl + T, if you auto-complete after the first word, pre-existing words are duplicated. E.g. if you:
- type the first word
- space
- start the second word
- finish autocomplete on the second word
then the stuff before the last space remains rather than being replaced, leaving you with something like:
<United United States>
The built-in markdown extension does not support this either, as it uses only slash separated "IDs" in its searches.
Ctrl + T: "Open symbol by name". Shows matching element IDs such as those of headers, and allows you to quickly jump to them. It lists:
- first, IDs that start with the search
- then, IDs that contain the search anywhere inside them
Unfortunately, due to VS Code limitations, you cannot use spaces in the search as we would like, e.g.:
fundamental theorem
will not find the ID fundamental-theorem-of-calculus. This is because VS Code does not pass the search after the first space to the extension at all in the provideWorkspaceSymbols callback. It does work if you instead use the hyphen - ID separator as in:
fundamental-theorem
Using % SQL wildcards such as in:
fund%ntal
also does not work. Looking at debug logs, we see that the correct SQL queries are generated and the correct rows returned, but VS Code must be doing a hardcoded post-processing step that removes the matches afterwards, so it also seems to be out of our control.
TODO: the built-in markdown extension handles spaces on Ctrl + T. Understand how it works.
ID extraction is performed on .bigb files automatically with ourbigbook --no-render file.bigb whenever new changes are saved to disk, e.g. with Ctrl + S. This ensures that the ID database is kept up to date so that Ctrl + T and autocompletion will work.
Ctrl + Shift + O: ID search in the current file. Somewhat of a subset of Ctrl + T, but works faster, is more scoped if you know your ID is in the current file, and allows you to type spaces.
- Ctrl + Click: jump to definition. If you hover anywhere over internal cross file reference-like elements such as the following:
<New York>
{tag=New York}
{parent=New York}
then the editor jumps to their definition point.
- outline, sticky scroll, breadcrumb. These features allow you to quickly navigate between headers of the current file in a tree structure. Consider adding the following shortcut to reveal the outline sidebar on Ctrl + 3:
{ "key": "ctrl+3", "command": "outline.focus" },
As you add or remove lines to the document, the outline becomes immediately outdated. To update it, make sure to save the document (Ctrl + S) and wait a few seconds.
Drag and drop editing gets requested from time to time but the issue just dies to the bot. In our case it wouldn't be so simple anyway, as \H parent arguments would also have to be adjusted.
- commands: all our command shortcuts are defined to only work on OurBigBook files (.bigb extension) to avoid cluttering non-OurBigBook projects. This is done as VS Code does not seem to have the concept of "project type" built into it. If you want the build and launch shortcuts to work on any file of your project, also define build and launch commands under .vscode:
  - Build all: save the current file if unsaved, and then build the project toplevel directory with ourbigbook .
  - Build all and view current output file: do ourbigbook . and then open the HTML output for the current file in your browser
Known limitations: snap browsers on Ubuntu 24.04 can't access dotfiles, so CSS and JavaScript will be broken when building with the extension, because they go under ~/.vscode: askubuntu.com/questions/1238211/how-to-make-snaps-access-hidden-files-and-folders-in-home#comment2676255_1238219 We don't know how to work around this besides not using the snap version of browsers.
TODO:
The default markdown support is incredible and should serve as inspiration for this extension: code.visualstudio.com/docs/languages/markdown
- deployment as static website or to OurBigBook Web via command
- live side-by-side HTML preview, maybe we could learn a bit from the Asciidoc extension
Historically, Vim support came first and was better developed. But that was just an ad-hoc path of least resistance, VS Code is the one we are going to actually support moving forward.
Our syntax highlighting attempts mostly to follow the official HTML style, which is perhaps the best maintained data-language. We have also had a look at the LaTeX, Markdown and Asciidoctor ones for reference.
One current fundamental limitation of VS Code is that there is no way to preview images and mathematics inline with text: stackoverflow.com/questions/52709966/vscode-is-it-possible-to-show-an-image-inside-a-text-buffer If only it were able to do that, it would go a long way to being as good as a WYSIWYG interface.
The source code for the extension is located under: vscode
It's the standard procedure for any extension:
- open a new workspace with just the extension at toplevel:
- Ctrl + Shift + N
- Ctrl + Shift + P
- Open workspace from file
- select vscode/vscode.code-workspace
- in the new window, from any file:
  - Ctrl + Shift + P
  - Debug: Start Debugging
  This opens a new window titled "Extension Development Host". You will likely then want to open a .bigb file from that window to test out the extension.
- from there on:
- make changes on the "vscode" workspace
- test them on the "Extension Development Host" window. To reload changes either:
- from the Extension Host run the "Developer: Reload Window" command to make extension changes take effect. We recommend adding a shortcut "Alt + Shift + R" for that
- from the "vscode" workspace, restart the debug process with the "Debug: Restart" command (default shortcut "Ctrl + Shift + F5")
Sometimes you need to change ourbigbook files like index.js when working on a new extension feature.
TODO: we don't have a neat way to handle this now. Currently, vscode/package.json uses fixed ourbigbook versions such as:
"dependencies": {
  "ourbigbook": "0.9.11"
and therefore does not pick up changes made to index.js.
To work around that, you can hack that line to:
"dependencies": {
  "ourbigbook": ".."
and:
cd vscode
npm install
The reason we don't use the .. by default is that we are unable to release the extension with the .. for an unknown reason, because then:
npx vsce package
is failing with:
Executing prepublish script 'npm run vscode:prepublish'...
> ourbigbook-vscode@0.0.26 vscode:prepublish
> npm run compile
> ourbigbook-vscode@0.0.26 compile
> tsc -p ./
ERROR Command failed: npm list --production --parseable --depth=99999 --loglevel=error
npm ERR! code ELSPROBLEMS
npm ERR! invalid: katex@v0.11.1 /home/ciro/bak/git/ourbigbook/node_modules/katex
npm ERR! A complete log of this run can be found in:
npm ERR! /home/ciro/.npm/_logs/2024-08-05T15_51_22_124Z-debug-0.log
One thing we could do is to play it really nasty and hack .. to a fixed version for release, then hack it back to .. immediately, always requiring an ourbigbook release for each vscode release.
If you use the .. hack, besides undoing the .. change, before releasing you have to:
cd vscode
rm -rf node_modules package-lock.json
otherwise the error does not go away.
You can install the support with Vundle with:
set nocompatible
filetype off
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
Plugin 'gmarik/vundle'
let g:snipMate = {}
let g:snipMate.snippet_version = 1
Plugin 'MarcWeber/vim-addon-mw-utils'
Plugin 'tomtom/tlib_vim'
Plugin 'garbas/vim-snipmate'
Plugin 'ourbigbook/ourbigbook', {'rtp': 'vim'}
or by directly dropping the files named below under your ~/.vim/, e.g. vim/syntax/ourbigbook.vim
The following support is provided:
- vim/syntax/ourbigbook.vim: syntax highlighting.
  As for other programming languages, this cannot be perfect without actually parsing, which would be slow for larger files. But even the imperfect approximation already covers most cases. Notably, it turns off spelling in parts of the document like URLs and code, which would otherwise contain many false positive spelling errors.
  Syntax highlighting can likely never be perfect without a full parser (which is slow), but even the imperfect approximate setup (as provided for most other languages) is already a huge usability improvement. We will attempt to err on the side of "misses some stuff but does not destroy the entire page below" whenever possible.
- vim/snippets/ourbigbook.snippets: snippets for github.com/honza/vim-snippets, which you also have to install first for them to work.
  For example, with those snippets installed, you can easily create links to headers. Suppose you have:
= My long header
  To create a cross reference to it you can:
  - copy My long header to the clipboard, see copy to clipboard shortcuts at: stackoverflow.com/questions/3961859/how-to-copy-to-clipboard-in-vim/67890119#67890119
  - type \x and then hit tab
  and it will automatically expand to:
\x[my-long-header]
  This provides a reasonable alternative for ID calculation, until a ctags-like setup gets implemented (never/browser editor with preview-only? ;-))
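The ID calculation mentioned above (the title My long header becoming my-long-header) can be approximated with the following sketch. The real conversion handles many more cases (Unicode, punctuation, scopes), so treat this only as an illustration; titleToId is a hypothetical name:

```javascript
// Rough approximation of a header title to ID conversion: lowercase the
// title, replace runs of non-alphanumeric characters with hyphens, and trim
// leading/trailing hyphens. The real OurBigBook rules are richer than this.
function titleToId(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
}

// titleToId('My long header') → 'my-long-header'
```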
- mappings:
  - <leader>f, which usually means ,f (comma then F): start searching for a header in the current file. Does a regular / search without opening any windows, so it is very lightweight. Mnemonic: "Find".
  - <leader>h (requires Fugitive to be installed): sets up the ObbGitGrep command, which searches for headers across all git tracked files in the current Git repository. Afterwards you are left in the prompt with:
ObbGitGrep
    so if you complete that as:
ObbGitGrep animal kingdom
    it will match headers that start with animal kingdom case insensitively, e.g.:
= Animal kingdom tree
= Animal kingdom book
    Vim regular expressions are accepted, e.g. if you don't want it to start with the search pattern:
ObbGitGrep .*animal kingdom
    The command opens a new tab (technically a "Vim error window") containing all matches, where you can hit Enter to open one of them. Mnemonic: "Header search".
A simple way to develop is to edit the Vundle repository directly under ~/.vim/bundle/ourbigbook.

There are two versions of this editor:
- static editor: a browser-only toy/demo with no persistent storage
- web editor: this is the editor present in OurBigBook Web. It is linked to the database, and has further features added on top of the static editor.
Issues for the editor are being tracked under: github.com/ourbigbook/ourbigbook/labels/editor
We must achieve an editor setup with synchronized live side-by-side preview.
Likely, we will first do a non WYSIWYG editor with side by side preview with scroll sync.
Then, if the project picks up steam, we can start considering a full WYSIWYG.
It would be amazing to have a WebKit interface that works both on the browser and locally.
Possibilities we could reuse:
- CKeditor ckeditor.com/ Used e.g. by Trilium Notes.
- Editor.js. Returns JSON AST!
- website: editorjs.io/
- source: github.com/codex-team/editor.js
- WYSIWYG: yes
- preview scroll sync: yes
- StackEdit
- markup implementation: PageDown
- website: stackedit.io
- source: github.com/benweet/stackedit
- demo: stackedit.io/app
- WYSIWYG: no
- preview scroll sync: yes
- Editor.md
- website: github.com/pandao/editor.md
- source: github.com/pandao/editor.md
- demo: pandao.github.io/editor.md
- WYSIWYG: no
- preview scroll sync: yes but buggy when tested 2019-12-12 on live website
- Custom editor with highlighting via highlight.js.
- markup implementation: custom
- website: markdown-it.github.io
- source: github.com/markdown-it/markdown-it
- WYSIWYG: no
- preview scroll sync: yes
- editor hangs on large input: yes
- Quill.md
- website: quilljs.com
- source: github.com/quilljs/quill/
- demo: pandao.github.io/editor.md
- WYSIWYG: yes
- markdown output: no github.com/quilljs/quill/issues/74
- ui.toast.com/tui-editor/
- www.froala.com/wysiwyg-editor
The "static editor" is the Browser editor with preview.
It can be viewed live at: docs.ourbigbook.com/_obb/dist/editor and its main source code is located at: editor.html.
The static editor is a browser-only toy/demo with no persistent storage. We call it "static" because it is able to run on a static website, as opposed to the more advanced editor present in OurBigBook Web, which interacts fully with a dynamic database. The static and dynamic editor codebases are however highly factored together, which is why they look identical.
That editor can be viewed directly locally with:
git clone https://github.com/ourbigbook/ourbigbook
cd ourbigbook
npm install
npm run build-assets
firefox dist/editor.html
You can also speed up the interactive development loop of editor.html with:
npm run webpack-dev
as usual when dealing with the dist directory.

The web editor is the editor present in OurBigBook Web. It is linked to the database, and has further features added on top of the static editor.
A lot of effort has been put into making error reporting as good as possible in OurBigBook, to allow authors to quickly find what is wrong with their source code.
Error reporting is for example tested with assert_error tests in test.js.
Please report any error reporting bug you find, as it will be seriously tracked under the error-reporting label.
Notably, OurBigBook should never throw an exception due to a syntax error, as that prevents error messages from being output at all.
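One common way to satisfy that never-throw requirement is to accumulate errors into a list instead of raising exceptions. The following is a hypothetical sketch of the pattern (convertLines and its toy backtick rule are inventions for illustration, not OurBigBook's actual code):

```javascript
// Sketch: accumulate syntax errors instead of throwing, so that conversion
// always finishes and every message can be reported at once.
// The "odd number of backticks" rule is a toy stand-in for real tokenization.
function convertLines(lines) {
  const errors = [];
  const output = [];
  lines.forEach((line, i) => {
    const ticks = (line.match(/`/g) || []).length;
    if (ticks % 2 === 1) {
      errors.push({ line: i + 1, message: 'unterminated literal argument' });
    } else {
      output.push(line);
    }
  });
  // No exception is ever thrown: callers inspect the errors array instead.
  return { output, errors };
}
```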
One important philosophy of the error reporting is that the very first message should be the root cause of the problem whenever possible: users should not be forced to search a hundred messages to find the root cause. In this way, the procedure:
- solve the first error
- reconvert
- solve the new first error
- reconvert
- etc.
should always deterministically lead to a resolution of all problems.
Error messages are normally sorted by file, line and column, regardless of which conversion stage they happened (e.g. a tokeniser error first gets reported before a parser error).
There is however one important exception to that: broken cross references are always reported last.
For example, consider the following syntactically wrong document:
= a

\x[b]

``

== b
Here we have an unterminated code block at line 5.
However, this unterminated code block leads the header b not to be seen, and therefore the reference \x[b] on line 3 fails.
Therefore, if we sorted naively by line, the broken reference would show up first:
error: tmp.bigb:3:3: cross reference to unknown id: "b"
error: tmp.bigb:5:1: unterminated literal argument
But in a big document, this could lead to hundreds of undefined references showing up before the actual root unterminated literal problem:
error: tmp.bigb:3:3: cross reference to unknown id: "b"
error: tmp.bigb:4:3: cross reference to unknown id: "b"
error: tmp.bigb:5:3: cross reference to unknown id: "b"
...
error: tmp.bigb:1000:1: unterminated literal argument
Therefore, we force undefined references to show up last to prevent this common problem:
error: tmp.bigb:1000:1: unterminated literal argument
error: tmp.bigb:3:3: cross reference to unknown id: "b"
error: tmp.bigb:4:3: cross reference to unknown id: "b"
error: tmp.bigb:5:3: cross reference to unknown id: "b"
...
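The ordering described above (file, line, column, with broken cross references forced last) can be expressed as a comparator along these lines. This is an illustrative sketch with hypothetical field names, not the actual implementation:

```javascript
// Sketch of the error message ordering: broken cross references always sort
// after every other error; within each group, sort by file, line and column.
// Field names (file, line, column, isBrokenRef) are illustrative.
function sortErrors(errors) {
  return errors.slice().sort((a, b) =>
    (a.isBrokenRef - b.isBrokenRef) ||   // false (0) before true (1)
    a.file.localeCompare(b.file) ||
    (a.line - b.line) ||
    (a.column - b.column)
  );
}
```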
OurBigBook is designed to not allow arbitrary code execution by default on any OurBigBook CLI command.
This means that it should be safe to just download any untrusted OurBigBook repository and convert it with OurBigBook CLI, even if you don't trust its author.
In order to allow code execution for pre/post processing tasks, e.g. from prepublish, use the --unsafe-ace option.
Note however that you have to be careful in general, since e.g. a malicious author could create a package with their own malicious version of the ourbigbook executable, which you could unknowingly run with the standard npx ourbigbook execution.
OurBigBook HTML output is designed to be XSS safe by default: any non-XSS safe constructs must be enabled with a non-default flag or setting, see: unsafeXss.
Of course, we are walking on eggshells here, and this is hard to assert, so the best thing to do later on would be to parse the output, e.g. with DOMParser, to ensure that it is valid and does not contain any script tags. But it is not as simple as that: stackoverflow.com/questions/37435077/execute-javascript-for-xss-without-script-tags/61588322#61588322
XSS unsafe constructs lead to errors by default. XSS unsafe constructs can be allowed from the command line with:
./ourbigbook --unsafe-xss
or from the ourbigbook.json file with an entry of the form:
"unsafeXss": true
- github.com/ourbigbook/ourbigbook/issues for any public contact, see also: Section 11.1. "OurBigBook issue tracker"
- for private contact email:
admin@ourbigbook.com
All our software is licensed under the GNU Affero General Public License (AGPL): LICENSE.txt, unless otherwise noted. This license basically means that if you use this software then you must publish any changes you make to it, even if you only use it in your own servers that serve external requests without publishing the software.
We require all contributions to give the OurBigBook Project non-exclusive rights to their contributions.
This means that contributors retain their copyright, and may reuse their part of the code as they see fit under additional licenses beyond AGPL, but so can the OurBigBook Project.
The AGPL of course can never be revoked once it has been applied. This only means that copyright owners may at any point also release their IP under another license.
The main rationale for this right now is to allow the OurBigBook Project the flexibility to one day allow someone to pay for a license that doesn't require releasing their source code under the AGPL without having to get all contributors ever to agree. This scenario is very unlikely to ever happen.
The OurBigBook Project's commitment is and always will be to provide free education for all, and we have no plans to ever make anything closed source. But if it ever happens that the only way to achieve the goals of free education is to make concessions and allow enterprise users to pay for using the site for their purposes, which is not the case at this point, we would like to keep that door open.
Such a CLA would also make it easier for the OurBigBook Project to fight in court to enforce the AGPL's terms, should that need ever arise.
Install master globally on your machine:
git clone https://github.com/ourbigbook/ourbigbook
cd ourbigbook
npm link
npm link ourbigbook
npm run build-assets
so you can now run the ourbigbook command from any directory in your computer, for example to convert the ourbigbook documentation itself:
ourbigbook .
Note that this repository uses outputOutOfTree, and so the output will be present at out/html/index.html rather than index.html.
We also have a shortcut for npm link and npm link ourbigbook:
npm run link
npm run link produces symlinks so that any changes made to the Git source tree will automatically be visible globally, see also: stackoverflow.com/questions/28440893/install-a-locally-developed-npm-package-globally The symlink structure looks like:
/home/ciro/ourbigbook/node_modules/ourbigbook -> /home/ciro/.nvm/versions/node/v14.17.0/lib/node_modules/ourbigbook -> /home/ciro/ourbigbook
As mentioned at useless knowledge, most users don't want global installations of OurBigBook. But this can be handy during development, as you can immediately see how your changes to the OurBigBook source code affect your complex example of interest. For example, Ciro developed a lot of OurBigBook by hacking github.com/cirosantilli/cirosantilli.github.io directly with OurBigBook master.
Just remember that if you add a new dependency, you must redo the symlinking business:
npm install <dependency>
npm run link
Asked if there is a better way at: stackoverflow.com/questions/59389027/how-to-interactively-test-the-executable-of-an-npm-node-js-package-during-develo
The symlink business can be undone with:
npm unlink
rm node_modules/ourbigbook
Run OurBigBook master mentions how to install and then run OurBigBook master globally, which is useful to build some projects locally on master.
To instead install locally in the current directory only, which can be useful for bisection:
npm install
ln -s .. node_modules/ourbigbook
npm run build-assets
You can now run tests as:
npm test
or the executable interactively as:
./ourbigbook .
It also works from a subdirectory:
mkdir -p tmp
cd tmp
../ourbigbook .
Run all tests:
npm test
To run all tests on PostgreSQL as in OurBigBook Web, first set up the PostgreSQL database similarly to local run as identical to deployment as possible:
createdb ourbigbook_cli
psql -c "CREATE ROLE ourbigbook_user with login password 'a'"
psql -c 'GRANT ALL PRIVILEGES ON DATABASE ourbigbook_cli TO ourbigbook_user'
psql -c 'GRANT ALL ON SCHEMA public TO ourbigbook_user'
psql -c 'GRANT USAGE ON SCHEMA public TO ourbigbook_user'
psql -c 'ALTER DATABASE ourbigbook_cli OWNER TO ourbigbook_user'
This got really annoying with PostgreSQL 15: stackoverflow.com/questions/67276391/why-am-i-getting-a-permission-denied-error-for-schema-public-on-pgadmin-4
And then run with:
npm run test-pg
List all tests:
node node_modules/mocha-list-tests/mocha-list-tests.js main.js
as per: stackoverflow.com/questions/41380137/list-all-mocha-tests-without-executing-them/58573986#58573986.
Run just one test by name:
npm test -- -g 'one paragraph'
or on PostgreSQL:
npm run test-pg -- -g 'one paragraph'
As per: stackoverflow.com/questions/10832031/how-to-run-a-single-test-with-mocha TODO: what if the test name is a substring?
You will want these Bash aliases:
npmtg() ( npm test -- -g "$*" )
npmtpg() ( npm run test-pg -- -g "$*" )
which allow you to just:
npmtg one paragraph
npmtpg one paragraph
Run all tests that don't start with cli::
npm test -- -g '^(?!cli:)'
This works because -g takes JavaScript regular expressions, so we can use negative lookahead, see also: stackoverflow.com/questions/26908288/with-mocha-how-do-i-run-all-tests-that-dont-have-slow-in-the-name
Suppose you selected a single test:
npm test -- -g 'cli: empty document'
and want to inspect the ID database status.
On SQLite it is not currently possible, as tests run on a temporary in-memory database. TODO: create a way.
On PostgreSQL, you can just inspect the ourbigbook_cli database with the psql command line executable, e.g.:
psql ourbigbook_cli -c 'select * from "Id"'
That database is used to run each test, and will contain the contents of the last test executed.
Step debug during a test run: add the statement:
debugger;
to where you want to break in the code, and then run:
npm run testi -- -g 'p with id before'
where the i in testi stands for inspect, from node inspect. Also consider the alias:
npmtgi() ( npm run testi -- -g "$*" )
Note however that this does not work for tests that run the ourbigbook executable itself, since those spawn a separate process. TODO: how to do it? Tried along the lines of:
const out = child_process.spawnSync('node', ['inspect', 'ourbigbook'].concat(options.args), {
  cwd: tmpdir,
  input: options.stdin,
  stdio: 'inherit',
});
but not working, related: stackoverflow.com/questions/23612087/gulp-target-to-debug-mocha-tests
So for now, we are just printing the command being run, as in:
cmd: cd out/test/executable-ourbigbook.json-outputOutOfTree && ourbigbook --split-headers .
so you can just re-run it manually with node inspect as in:
cd out/test/executable-ourbigbook.json-outputoutoftree && node inspect "../../../ourbigbook" --split-headers .
This works since the tmp directory is not deleted in case of failure.

There are two types of test in our test suite:
- API tests, which use the JavaScript API directly:
  - these tests don't actually create files in the filesystem, and just mock the filesystem with a dictionary instead. Database access is not mocked however: we just use SQLite's fantastic in-memory mode.
  - whenever possible, these tests check their results just from the abstract syntax tree returned by the API, which is cleaner than parsing the HTML. But sometimes HTML parsing is inevitable.
- tests that call the ourbigbook executable itself:
  - their titles are prefixed with cli:
  - they tend to be a lot slower than the API tests
  - they can test functionality that is done outside of the ourbigbook.convert JavaScript API, notably stuff present in the ourbigbook executable, so they are more end to end
  - they don't do any mocking, and could therefore be more representative. However, as of 2022, we have basically eliminated all the hard database access mocking and are using the main database methods directly. So all that has to be mocked is basically stuff done in the ourbigbook executable itself. This means that, except for more specific options, the key functionality of ourbigbook, which is to convert multiple paths, can be tested very well in a non-executable test. The only major difference is that instead of passing command line arguments like in ourbigbook . to convert multiple files in a directory, you have to use convert_before and convert_before_norender and specify the conversion order one by one. This test robustness is new as of 2022, and many tests were previously written as executable tests that would now also work as unit tests, and we generally want that to be the case to make the tests go faster.
  - they work by creating an actual physical filesystem under out/test/<normalized-test-title> with the OurBigBook files and other files like ourbigbook.json, and then running the executable on that directory. npm test first deletes the out/test directory before running the tests. After running, the generated files are kept so you can inspect them to help debug any issues.
  - all these tests check their results by parsing the HTML and searching for elements, since here we don't have access to the abstract syntax tree. It wouldn't be impossible to obtain it however, as it is likely already JSON serializable.
Source files:
- index.js: main OurBigBook Markup conversion code
- ourbigbook: CLI executable. It is basically just a CLI interface frontend to convert
- test.js: contains all the Mocha tests, see also: test system
- README.md: minimal Markdown README until GitHub / NPM support OurBigBook :-)
- ourbigbook_runtime.js: runtime features
- main.scss: this file simply contains the customized CSS for docs.ourbigbook.com/ and does not get otherwise distributed with OurBigBook, see: CSS
dist/ contains fully embedded packaged versions that work on browsers, as per common JavaScript package naming convention. All the following files are generated with Webpack with:
npm run webpack
or:
npm run build-assets
The files in that directory are:
- dist/ourbigbook.js: the OurBigBook JavaScript API converter for browser usage. The source entry point for it is located at index.js. Contains the code of every single dependency from node_modules used by index.js. This is notably used for the live preview of the browser editor with preview.
- dist/ourbigbook_runtime.js: similar to dist/ourbigbook.js, but contains the converted output of ourbigbook_runtime.js. You should include this in every OurBigBook HTML output.
- dist/ourbigbook.css: minimized CSS needed to view OurBigBook output as intended. Embeds all OurBigBook CSS dependencies, notably the KaTeX CSS, without which mathematics displays as garbage. The Sass entry point for it is: ourbigbook.scss.
- dist/editor_css.css: the CSS of the editor, rendered from editor.scss.
To develop these files, you absolutely want to use:
npm run webpack-dev
This runs Webpack in development mode, which has two huge advantages:
- almost instantaneous compilation, as opposed to the unbearable 5+ seconds of an optimized build
- source maps are enabled, so you can see the full original sources when debugging: blog.jakoblind.no/debug-webpack-app-browser/
npm run webpack-dev also enables watch mode, so it keeps running until you turn it off. This setup also works seamlessly when developing OurBigBook Web: just let the watch process run in a separate terminal.
When publishing with OurBigBook CLI, certain files such as the dist directory are placed under the _obb directory in the final output. Because _obb is a reserved ID, we can safely dump any autogenerated files under _obb without fear of name conflicts with other files.

OurBigBook stores some metadata and outputs it generates/needs inside the ./out/ directory that it creates inside the --outdir <outdir>.
Overview of the files it contains:
- db.sqlite3: cross file reference internals
- publish: a git clone of the source of the main repository, to ensure that untracked files won't accidentally go into the output
- publish/out/db.sqlite3: like out/db.sqlite3, but from the clean clone at out/publish
- publish/out/publish: the final generated output directory that gets published, e.g. as in publish to GitHub Pages
A conversion follows the following steps done for each file to be converted:
- tokenizer. Reads the input and converts it to a linear list of tokens.
- parser. Reads the list of tokens and converts it into an abstract syntax tree. The parser can be called multiple times recursively when doing certain operations.
- AST post process pass 1. An AST post process pass takes the abstract syntax tree that comes out of a previous step, e.g. the original parser output, and modifies the tree to achieve various different functionalities. We may need to iterate the tree multiple times to achieve all desired effects; at the time of writing it was done twice. Each iteration is called a pass. You can view snapshots of the tree after each pass with the --log option:
ourbigbook --log=ast-pp-simple input.bigb
This first pass does few but very wide-reaching operations that will determine what data we will have to fetch from the database during the following DB queries step. It might also do some operations that are required for pass 2 but that don't necessarily fetch data, not sure anymore. E.g. this is where the following functionality is implemented:
  - synonym and scope
  - \OurBigBookExample and --embed-includes: they are stitched into the main AST
- AST post process pass 2: we now do every other post process operation that was not done in pass 1, e.g.:
  - insane paragraphs, lists and tables
- AST post process pass 3: this does some minimal tree hierarchy linking between parents and children. TODO: could it be merged into pass 2? Feels likely.
- render, which converts our AST into an output string. This is run once for the toplevel, and once for every header of the document if -S, --split-headers is enabled. We need to do this because header renders are different from their toplevel counterparts, e.g. their first paragraph has ID p-1 and not p-283. All of those renders are done from the same parsed tree however: parsing happens only once. This step is skipped when using the --no-render option, or during ID extraction. TODO: it is intended that it should not be possible for there to be rendering errors once the previous steps have concluded successfully. This is currently not the case for at least one known scenario however: cross references that are not defined. Sub-steps include:
  - DB queries: this is the first thing we do during the rendering step. Every single database query must be done at this point, in one go. Database queries are only done while rendering, never while parsing. The database is nothing but a cache for source file state, and this separation means that we can always cache input source state into the database during parsing without relying on the database itself, thus preventing any circular dependencies from parsing to parsing. Keeping all queries together is fundamental for performance reasons, especially for the browser editor with preview in OurBigBook Web: imagine doing 100 scattered server queries:
    SELECT * from Ids WHERE id = '0'
    SELECT * from Ids WHERE id = '1'
    ...
    SELECT * from Ids WHERE id = '100'
    vs grouping them together:
    SELECT * from Ids WHERE id IN ('0', '1', ..., '100')
    We call this joining up of small queries into big ones "query bundling". It also has the benefit of allowing us to remove async/await from almost every single function in the code; async/await considerably slows down the CPU-bound execution path. As an added bonus, it also allows us to clearly see the impact of database queries when using --log perf.
- at the very end of the conversion, we then save the database changes calculated during parsing and post processing back to the DB, so that the conversion of other files will pick them up. Just like for the SELECTs, we do a single large INSERT/UPDATE query per database to reduce the round trips.
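The shape of query bundling can be sketched with a toy model. All names here are hypothetical illustrations, not the actual OurBigBook internals; the real code batches queries against db.sqlite3:

```javascript
// Toy sketch of "query bundling": instead of issuing one database
// query per cross reference as it is rendered, the renderer first
// collects every ID it will need, then fetches them all in one go.
// The idDb Map stands in for the real Ids table.
const idDb = new Map([
  ['my-title', { path: 'index.bigb' }],
  ['another-title', { path: 'index.bigb' }],
])

// One bulk fetch, analogous to SELECT * FROM Ids WHERE id IN (...).
function getIdsBulk(ids) {
  const ret = new Map()
  for (const id of ids) {
    if (idDb.has(id)) ret.set(id, idDb.get(id))
  }
  return ret
}

// Step 1: walk the tree and collect all referenced IDs.
function collectRefTargets(astNodes) {
  return astNodes.filter(n => n.macro === 'x').map(n => n.href)
}

// Step 2: single bulk query; step 3: render with the prefetched map,
// so the render loop itself never touches the database.
function render(astNodes) {
  const fetched = getIdsBulk(collectRefTargets(astNodes))
  return astNodes
    .map(n => n.macro === 'x'
      ? (fetched.has(n.href) ? `<a href="#${n.href}">` : 'ERROR')
      : n.text)
    .join('')
}

const out = render([
  { macro: 'plaintext', text: 'A link to ' },
  { macro: 'x', href: 'another-title' },
])
```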
Conversion of a directory with multiple input files works as follows. The two pass approach is required to resolve cross references:
- do one ID extraction pass without render
- do a global database check/fixup for all files that have been parsed, which checks in one go for:
  - that all cross reference targets exist. When using the \x magic argument:
    - only one of the plural/singular needs to exist
    - we then decide which one to use and delete the other one. Both are initially placed in the database during the ID extraction phase.
  - duplicate IDs
  - references from one non-header title to another non-header title, as mentioned at \x within title restrictions
  Ideally, failure of any of the above checks should lead to the database not being updated with new values, but that is not the case as of writing.
- do one conversion pass with render. To speed up conversion, we might at some point start storing a parsed JSON after the first conversion pass, and then just deserialize it and convert the deserialized output directly without re-parsing.
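A toy sketch of this two pass scheme follows. Names and the "markup" are illustrative only, not the actual OurBigBook implementation (the real ID extraction also handles scopes, synonyms, etc.):

```javascript
// Toy two pass directory conversion: pass 1 extracts IDs from every
// file into a shared "database" without rendering, so that pass 2
// can resolve cross references across files.
const files = {
  'index.bigb': ['= Index', '\\x[notation]'],
  'notation.bigb': ['= Notation'],
}

// Derive an ID from a toy "= Header" line.
function extractId(line) {
  return line.startsWith('= ')
    ? line.slice(2).toLowerCase().replace(/ /g, '-')
    : undefined
}

// Pass 1: ID extraction, no render. Duplicate IDs are detected here.
const idDb = new Map()
for (const [path, lines] of Object.entries(files)) {
  for (const line of lines) {
    const id = extractId(line)
    if (id !== undefined) {
      if (idDb.has(id)) throw new Error('duplicate ID: ' + id)
      idDb.set(id, path)
    }
  }
}

// Pass 2: render; every \x target must now be known, even targets
// defined in files that come later in the conversion order.
function renderLine(line) {
  return line.replace(/\\x\[(.*?)\]/g, (m, target) => {
    if (!idDb.has(target)) throw new Error('undefined target: ' + target)
    return `<a href="${idDb.get(target)}#${target}">${target}</a>`
  })
}
const rendered = files['index.bigb'].map(renderLine)
```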
One of the main two passes done during conversion, where the files are parsed and all references stored in the database.
The implementation of much of the functionality of OurBigBook involves manipulating the abstract syntax tree.
The structure of the AST is as follows:
- AstNode: contains a map from argument names to the values of each argument, which are of type AstArgument
- AstArgument: contains a list of AstNode. These are generally just joined up in the output, one after the other.
One important exception to this are plaintext nodes. These nodes contain just a raw string instead of a list of arguments. They are usually the leaf nodes.
We can easily observe the AST of an input document by using the following --log options:
ourbigbook --log=ast-simple input.bigb
ourbigbook --log=ast input.bigb
For example, the document:

= My title
{c}

A link to \x[another-title]{c}{p} and more text.

== Another title

produces with --log=ast-simple the following output:
ast Toplevel
  arg content
    ast H id="tmp"
      arg c
      arg level
        ast plaintext "1"
      arg numbered
        ast plaintext "0"
      arg scope
        ast plaintext "0"
      arg splitDefault
        ast plaintext "0"
      arg synonym
        ast plaintext "0"
      arg title
        ast plaintext "My title"
    ast P id="p-1"
      arg content
        ast plaintext "A link to "
        ast x
          arg c
          arg child
            ast plaintext "0"
          arg full
            ast plaintext "0"
          arg href
            ast plaintext "another-title"
          arg p
          arg parent
            ast plaintext "0"
          arg ref
            ast plaintext "0"
        ast plaintext " and more text."
    ast Toc id="toc"
    ast H id="another-title"
      arg c
        ast plaintext "0"
      arg level
        ast plaintext "2"
      arg numbered
        ast plaintext "0"
      arg scope
        ast plaintext "0"
      arg splitDefault
        ast plaintext "0"
      arg synonym
        ast plaintext "0"
      arg title
        ast plaintext "Another title"
The following scripts generate parametrized OurBigBook examples that can be used for performance or other types of interactive testing:
./generate-deep-tree 2 5 > deep_tree.tmp.bigb
./ourbigbook deep_tree.tmp.bigb
Originally designed to allow interactively playing with a huge table of contents, in order to streamline the JavaScript open/close interaction.
./generate-paragraphs 10 > main.bigb
Output:
0
1
2
3
4
5
6
7
8
9
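generate-paragraphs is essentially just a loop printing one numbered paragraph per iteration, something along these lines (a sketch, not the actual script):

```javascript
// Sketch of a generate-paragraphs style generator: print n
// paragraphs, each containing just its index, separated by blank
// lines so that each number becomes its own paragraph in most
// markup languages.
function generateParagraphs(n) {
  const paragraphs = []
  for (let i = 0; i < n; i++) {
    paragraphs.push(String(i))
  }
  return paragraphs.join('\n\n') + '\n'
}

process.stdout.write(generateParagraphs(10))
```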
We have stopped making any effort to generate nicely indented HTML output, as it just did not feel worth it.
Instead, if you want to debug some badly formatted HTML you can just use our pre-installed js-beautify dependency, e.g. with:
npx js-beautify out/html/index.html
To log some performance statistics, use: performance log.
One quick and dirty option is to use generate-paragraphs, which generates output compatible with most markup languages:
./generate-paragraphs 100000 > tmp.bigb
On Ubuntu 20.04 Lenovo ThinkPad P51 for example:
- OurBigBook 54ba49736323264a5c66aa5d419f8232b4ecf8d0 + 1, Node.js v12.18.1
time ./ourbigbook tmp.bigb
outputs:
real    0m5.104s
user    0m6.323s
sys     0m0.674s
- Asciidoctor 2.0.10, Ruby 2.6.0p0:
cp tmp.bigb tmp.adoc
time asciidoctor tmp.adoc
outputs:
real    0m1.911s
user    0m1.850s
sys     0m0.060s
- cmark 0.29.0:
cp tmp.bigb tmp.md
time cmark tmp.md > tmp.md.html
outputs:
real    0m0.091s
user    0m0.070s
sys     0m0.021s
Holy cow, it is about 20x faster than Asciidoctor!
- markdown-it at 5789a3fe9693aa3ef6aa882b0f57e0ea61efafc0 to get an idea of a JavaScript markdown implementation:
time markdown-it tmp.md > tmp.md.html
outputs:
real    0m0.361s
user    0m0.590s
sys     0m0.060s
- cat, just to find the absolute floor:
time cat tmp.bigb > tmp.tmp
outputs:
real    0m0.006s
user    0m0.006s
sys     0m0.000s
On P51:
time ./ourbigbook --no-render . && time ./ourbigbook -S --log=perf README.bigb
- ourbigbook 39e633f08b2abce10331b884c04d70dbe6d4565a before moving OurBigBook to Sequelize: 14s
convert README.bigb
perf start 248.0712810009718
perf tokenize_pre 248.36641899868846
perf tokenize_post 2027.6697090007365
perf parse_start 2028.678952999413
perf post_process_start 2684.3162699975073
perf post_process_end 5017.946601998061
perf split_render_pre 6572.925067000091
perf render_pre 5018.202895000577
perf render_2_pre undefined
perf render_post 13093.658641997725
perf end 13126.138450000435
perf convert_input_end 14281.568749997765
perf convert_path_pre_sqlite 14281.64151499793
perf convert_path_pre_sqlite_transaction 14281.835940998048
perf convert_path_post_sqlite_transaction 14551.673818998039
perf convert_path_end 14551.860617998987
convert README.bigb finished in 14309.703324001282 ms
perf convert_path_to_file_end 14552.230636000633
real    0m14.602s
user    0m16.500s
sys     0m1.832
sqlite3 out/db.sqlite3 .schema
CREATE TABLE IF NOT EXISTS 'ids' ( id TEXT PRIMARY KEY, path TEXT, ast_json TEXT );
CREATE TABLE IF NOT EXISTS 'includes' ( from_id TEXT, from_path TEXT, to_id TEXT, type TINYINT );
CREATE INDEX includes_from_path ON includes(from_path);
CREATE INDEX includes_from_id_type ON includes(from_id, type);
CREATE TABLE IF NOT EXISTS 'files' ( path TEXT PRIMARY KEY, toplevel_id TEXT UNIQUE );
- ourbigbook 8e6a4311f7debd079721412e1ea5d647cc1c2941 after, OMG gotta debug perf now:
real    0m29.595s
user    0m34.095s
sys     0m4.427s
convert README.bigb
perf start_convert undefined
perf tokenize_pre 411.71094800531864
perf tokenize_post 2265.3646410033107
perf parse_start 2266.436312004924
perf post_process_start 2905.8304330036044
perf post_process_end 12113.761793002486
perf split_render_pre 16534.5258340016
perf render_pre 12114.092462003231
perf render_post 40937.611143000424
perf end_convert undefined
perf convert_input_end 42042.85608199984
perf convert_path_pre_sqlite 42042.92515899986
perf convert_path_pre_sqlite_transaction 42147.847070001066
perf convert_path_post_sqlite_transaction 42732.242132000625
perf convert_path_end 42732.35991900414
convert README.bigb finished in 42327.62727500498 ms
perf convert_path_to_file_end 42732.534088000655
real    0m42.779s
user    0m46.530s
sys     0m6.945s
sqlite3 out/db.sqlite3 .schema
CREATE TABLE `Files` (`id` INTEGER PRIMARY KEY AUTOINCREMENT, `path` TEXT NOT NULL UNIQUE, `toplevel_id` TEXT UNIQUE);
CREATE TABLE sqlite_sequence(name,seq);
CREATE INDEX `files_path` ON `Files` (`path`);
CREATE INDEX `files_toplevel_id` ON `Files` (`toplevel_id`);
CREATE TABLE `Ids` (`id` INTEGER PRIMARY KEY AUTOINCREMENT, `idid` TEXT NOT NULL UNIQUE, `path` TEXT NOT NULL, `ast_json` TEXT NOT NULL);
CREATE INDEX `ids_path` ON `Ids` (`path`);
CREATE TABLE `Refs` (`id` INTEGER PRIMARY KEY AUTOINCREMENT, `from_id` TEXT NOT NULL, `from_path` TEXT NOT NULL, `to_id` TEXT NOT NULL, `type` TINYINT NOT NULL);
CREATE INDEX `refs_from_path` ON `Refs` (`from_path`);
CREATE INDEX `refs_from_id_type` ON `Refs` (`from_id`, `type`);
CREATE INDEX `refs_to_id_type` ON `Refs` (`to_id`, `type`);
The question is: is it because we added async everywhere, or is it because of changes in the database queries? Answering the question: we added back the old DB code at github.com/ourbigbook/ourbigbook/tree/async-slow-old-db and it is fast again. So DB debugging it is, hurray.
Tokenized token stream and AST can be obtained as JSON from the API.
Errors can be obtained as JSON from the API.
Everything that you need to write OurBigBook tooling is present in the main API.
All tooling will be merged into one single repo.
Every OurBigBook document is implicitly put inside a \Toplevel macro:
- any arguments optionally given at the very beginning of the document will be treated as arguments of the \Toplevel macro
- anything else will be put inside the content argument of the \Toplevel macro
E.g., an OurBigBook document that contains:

{title=My favorite title}

And now, some content!

is morally equivalent to:

\Toplevel{title=My favorite title}
[
And now, some content!
]

In terms of HTML, the \Toplevel element corresponds to the <html>, <head>, <header> and <footer> elements of a document. Trying to use the \Toplevel macro explicitly in a document leads to an error.
The toplevel title is determined as follows:
- if the title argument of \Toplevel is given, use that
- otherwise, if the document has a \H[1], use the title of the first such header
- otherwise, use a dummy value
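The fallback chain above can be sketched as follows (hypothetical helper, not the actual implementation):

```javascript
// Sketch of the toplevel title fallback chain: explicit \Toplevel
// title argument, then the title of the first \H[1], then a dummy.
function toplevelTitle(toplevelArgs, headers) {
  if (toplevelArgs.title !== undefined) {
    return toplevelArgs.title
  }
  const h1 = headers.find(h => h.level === 1)
  if (h1 !== undefined) {
    return h1.title
  }
  return 'dummy'
}
```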
To add your own macro, try to copy from another existing macro that feels the closest in functionality.
For example, suppose we want something similar to a line break. So let's grep:
git grep '\bbr\b'
Grep currently gives:
index.js:8018: 'br',
index.js:8841: 'br': function(ast, context) { return '<br>' },
index.js:9792: 'br': function(ast, context) { return '\n'; },
index.js:10289: 'br': ourbigbookConvertSimpleElem,
test_bigb_output.bigb:71:br:
test_bigb_output.bigb:74:aa\br
test_bigb_output.bigb:75:bb\br
test_bigb_output.bigb:79:asdf\br
test_bigb_output.bigb:98:Non-br macro without arguments\i
All functionality is contained in index.js as expected. Expanding the hits a bit further we see:
const DEFAULT_MACRO_LIST = [
  ...
  new Macro(
    'br',
    [],
    {
      phrasing: true,
    }
  ),
  ...
]

const OUTPUT_FORMATS_LIST = [
  new OutputFormat(
    OUTPUT_FORMAT_HTML,
    {
      ext: HTML_EXT,
      convert_funcs: {
        ...
        'br': function(ast, context) { return '<br>' },
        ...
      },
    }
  ),
  new OutputFormat(
    OUTPUT_FORMAT_ID,
    {
      ext: 'id',
      convert_funcs: {
        ...
        'br': function(ast, context) { return '\n'; },
        ...
      },
    }
  ),
  new OutputFormat(
    OUTPUT_FORMAT_OURBIGBOOK,
    {
      ext: OURBIGBOOK_EXT,
      convert_funcs: {
        ...
        'br': ourbigbookConvertSimpleElem,
        ...
      },
    }
  ),
]
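The overall registration pattern can be sketched as a self-contained toy (names simplified; this mirrors the macro-list plus per-output-format convert_funcs shape, not the actual index.js code):

```javascript
// Toy version of the registration pattern: a macro list declares
// each macro, and every output format maps macro names to a
// conversion function.
const macros = [
  { name: 'br', phrasing: true },
]

const outputFormats = {
  html: {
    ext: 'html',
    convertFuncs: {
      'br': (ast) => '<br>',
    },
  },
  id: {
    ext: 'id',
    convertFuncs: {
      'br': (ast) => '\n',
    },
  },
}

// Dispatch a node to the conversion function of the chosen format.
function convert(ast, format) {
  const func = outputFormats[format].convertFuncs[ast.macro]
  if (func === undefined) throw new Error('unknown macro: ' + ast.macro)
  return func(ast)
}
```

Adding a new macro then means adding one entry to the macro list, plus one convert function per output format, just like br above.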
To generate the CSS during development after any changes to that file, you must run:
npm run sass
which generates the final CSS file:
main.css
You then need to explicitly include that main.css file in your --template. For example, our ourbigbook.liquid.html contains the line:
<link rel="stylesheet" type="text/css" href="{{ root_relpath }}main.css">
root_relpath is explained under Section 5.5.25. "--template".
The file ourbigbook.common.scss contains stand-alone Sass definitions that can be used by third parties. One use case is to factor the OurBigBook style out of the site-specific boilerplate.
E.g. a website that stores its custom rules under main.scss can do stuff like:
@import 'ourbigbook/ourbigbook.common.scss';
The main design goal on narrow screens is that horizontal scrolling should never be enabled for the whole document, only on a per element basis.
Every foreign key should have a manually created associated index; this is not done automatically by either PostgreSQL or Sequelize.
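For example, for a Refs-like table, the index would be created by hand along these lines (illustrative DDL only, following the shape of the schema dumps earlier in this document):

```sql
-- Illustrative only: a foreign-key-like column gets its own
-- manually created index, since neither PostgreSQL nor Sequelize
-- creates one automatically.
CREATE TABLE "Refs" (
  "id" INTEGER PRIMARY KEY,
  "from_path" TEXT NOT NULL
);
CREATE INDEX "refs_from_path" ON "Refs" ("from_path");
```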
TODO. Describe OurBigBook's formal grammar, and classify it in the grammar hierarchy and parsing complexity.
This section describes the release procedure for OurBigBook CLI, which is an npm package. For OurBigBook Web deployment see: OurBigBook Web deployment.
Before the first time you release, make sure that you can log in to NPM with:
npm login
This prompts you to login via the browser with 2FA. Currently you can also tick a box to not be asked again for the next 5 minutes, which should be enough for the following release command. If you don't select this option, you will be prompted midway through the release command for login.
Releases should always be made with the official www.npmjs.com/~ourbigbook-admin NPM user.
Then, every new release can be done automatically with the release script, e.g. to release a version 0.7.2:
./release 0.7.2
or, to just increment the patch version, e.g. from the current 0.7.1 to 0.7.2, you can omit the version argument:
./release
That script does the following actions, aborting immediately if any of them fails:
- runs the tests
- publishes this documentation
- updates version in package.json
- creates a release commit and a git tag for it
- pushes the source code
- publishes the NPM package
After publishing, a good minimal sanity check is to ensure that you can render the template as mentioned in play with the template:
cd ~
# Get rid of the global npm link development version just to make sure it is not being used.
npm uninstall -g ourbigbook
git clone https://github.com/ourbigbook/template
cd template
npm install
npx ourbigbook .
firefox out/html/index.html
The VS Code extension is automatically updated and released by the standard release procedure: Section 12.12.1. "Do the release".
This is done so that the extension always includes the latest version of the ourbigbook package.
This is not ideal as we would like for the extension to use the ourbigbook package version specified on each project's package.json.
However that is not easy to achieve, because in some cases we need to refactor ourbigbook to allow for a new extension feature, creating incompatibilities.
So for now, we ignore the problem and take the "easy to get started" approach of "always ship ourbigbook with the extension".
If changes are made only to the extension, it is also possible to release a new version of the extension alone with release-vscode:
./release-vscode
The repository cirosantilli.com/ourbigbook-media contains media for the project, such as for documentation and publicity.
It was created to keep blobs out of this repository.
Some blobs were unfortunately added to this repository earlier on, but when we saw that we would need more and more, we made the sane call and forked that out.
The OurBigBook Project currently has a single top level executive, the OurBigBook Admin, who has ultimate power over the project.
There is currently no legal incorporated entity.
These will likely change if the project ever gets any traction, but for now things are being run in an informal manner only.
Ciro Santilli is the founder and Absolute Magnanimous All Powerful Eternal Ruler (AMAPER) of the OurBigBook Project.
Ciro is passionate about
- free education that allows learners to progress as fast as they want
- User Generated Content, which allows anyone to be the teacher
His motivations for starting the OurBigBook Project can be seen at: ourbigbook.com/cirosantilli/ourbigbook-com/philosophy
Ciro is a big Stack Overflow contributor, having reached top 50 yearly reputation leagues for several consecutive years in the early 2020's: ourbigbook.com/cirosantilli/ciro-santilli-s-stack-overflow-contributions and he has a few 1k+ star educational GitHub repositories: github.com/search?o=desc&q=user%3Acirosantilli&s=stars&type=Repositories.
- OurBigBook.com: ourbigbook.com/cirosantilli
- Homepage: cirosantilli.com
- GitHub: github.com/cirosantilli
- LinkedIn: www.linkedin.com/in/cirosantilli
- Twitter: twitter.com/cirosantilli
- Stack Overflow: stackoverflow.com/users/895245
Toplevel executive of the OurBigBook Project, who has ultimate power over the project.
OurBigBook.com account: ourbigbook.com/ourbigbook
GitHub account: github.com/ourbigbook-admin
OurBigBook Admins can select one article from any user to be pinned to the website's "front index pages" such as the global article, topics or user indexes.
The typical use case of this feature is to facilitate user onboarding, and it could also be used for general server announcements.
To modify the pinned article, admins must visit the "Site Settings" page under: ourbigbook.com/go/site-settings. That page can be accessed via the "Site Settings" button at the bottom of each index page.
Contributors who have made any non-trivial contribution to the project will be listed here with their consent!
Contributors are encouraged to add an intro about themselves and some profile links if they wish to their own section, but this is entirely optional.
- OurBigBook.com (case insensitive)
- twitter.com/OurBigBook
- mastodon.social/@OurBigBook: Twitter posts are mirrored there, as Twitter had become too restrictive towards 2024, e.g. with mandatory login
- github.com/OurBigBook GitHub organization (case insensitive)
- www.youtube.com/@OurBigBook (case insensitive)
- LinkedIn company: www.linkedin.com/company/ourbigbook
- LinkedIn group: www.linkedin.com/groups/9164882/ TODO vs the company.
- Facebook:
- page: www.facebook.com/OurBigBook (case insensitive)
- group: www.facebook.com/groups/OurBigBook (case insensitive)
- Discord: OurBigBook6998 "OurBigBook's Server" invite link: discord.gg/A8P5zGcWUh
- www.instagram.com/ourbigbook (case insensitive)
We are planning on using a clear green on white color scheme to reflect the current website CSS. We are thinking about the following layout:
- front: "OurBigBook.com" on upper left chest text
- back:
- "OurBigBook.com" on top across back. This positioning is crucial as it will show above chairs in amphitheatres
- logo below text centered
TODO ideas: understand what students wear, and then copy it with our logo. E.g. for Oxbridge, one could design college puffer jackets.
All items are available at: www.tshirtstudio.com/marketplace/ourbigbook-com from tshirtstudio.com. Each sale includes a 5 dollar/euro/pound donation to the project.
This is a reasonable website.
It is a shame that you can't easily drag and drop to move/resize images on the web UI, which has led us to do that manually in the source images.
But still, it is relatively easy to use, and easy to set up a marketplace in.
Another downside is that it does not seem possible to edit existing designs, so it is a bit hard to know exactly what you had done when it is time to update things.
- www.redbubble.com/people/OurBigBook/shop (case insensitive) has a marketplace mechanism. 6.64 pounds/unit for the 21 cm size, which is quite expensive. It is unclear how much they pay the creator. TODO: the following links are currently restricted because it is a new account. They said it would be unrestricted in five business days, but it was still not true after one month. It appears that you have to upload five designs for anything to be publicly available... Contacted them and they confirmed that there is no workaround for that. That service is a bit crap; have to find a new one later on. Their delivery is a bit slow: 7 business days in theory, but took at least 21 in reality. They must be stretched thin. The product quality was good when it finally arrived though.
- stickerapp.co.uk/ does not seem to have a marketplace, 2 pounds/unit on an 11 unit order, so much cheaper. But why would we need 11 is the question. Just going with the marketplace for now. Standard delivery in about 7 business days.
Two sticker widths: 5 inch (12.7 cm) or 7.5 inch (19 cm). Also does t-shirts and hoodies. Design not showing on newly created shop page after several refreshes.
Rectangle widths available: 12.7 cm and 17.8 cm, which is reasonable. Both £5.99. Also does t-shirts and hoodies. Good design UI.
Only has a rectangle of width 11.43 cm, price £5.70. Also does t-shirts and hoodies. Design UI is a bit cluttered.
The logo can be seen at: Figure 1. "Logo of the OurBigBook Project".
Some rationale:
- the lowercase b followed by the uppercase B gives the idea of big and small
- the small o looks a bit like a degree symbol, which feels sciency. It also contributes to the idea of small to big: o is smallest, b a bit larger, and B actually big
- keep the same clear on black feeling as the default CSS output
- yellow, green and blue are the colors of Brazil, where Ciro Santilli was born!
It might be cool if we were able to come up with something that looks more like an actual book though instead of just using a boring lettermark.
A good point of the current design is that it suggests a certain simplicity. We want the explanations of our website to be simple and accessible to all.
In addition to the pictorial logo, we have also created a few textual logos which might be useful.
We first designed them as a way to take up the upper left chest square space nicely on tshirtstudio.com T-shirts, as a long one line version of ourbigbook.com would be too small and unreadable. The main idea of the text logo is to make a letter square with uppercase monospace font letters:
OUR
BIG
BOOK
.COM
We could make the OBB red and the other letters white. But that does come a bit closer to our dreaded ÖBB name competitor. Note that monospace fonts are not actually square, only fixed width: graphicdesign.stackexchange.com/questions/45260/name-for-type-that-has-the-same-width-and-height
Another idea to differentiate from ÖBB would be to go lowercase:
obb
We were thinking something like:
Learn for real!
but we wonder if that wouldn't be too close to: www.learningforreal.org/. Maybe not.
Another one that is also somewhat taken is:
You be the teacher
www.teacherspayteachers.com/Product/You-Be-The-Teacher-Independent-Research-Project-Distance-Learning-3785062
A free domain name was the key restriction.
We almost went with destroyuni.com!!! But Ciro regained his senses in the end. A two word domain would be sweet though. But Ciro was very happy with OurBigBook. Some other <possessive><adjective><noun> domains:
Websites that accept banners:
- YouTube
- Patreon
- YouTube
- Patreon
Demo videos are uploaded to the official YouTube account: www.youtube.com/@OurBigBook
The video files together with the assets used to make them are also made available in the OurBigBook media repository under the
video/
directory.Video guidelines:
- desktop recording area size: 720x720. This could perhaps be optimized, but it is a reasonable size that works both as a YouTube Short and as a Twitter post. Previously we had been using 700x700, but at some point YouTube appears to have stopped generating 720p resolution for those, and 480p is just too bad. We've been happily using vokoscreenNG. A good technique is to move the recording window to the lower left bottom of the screen, which stops things from floating around too much.
- use Chromium/Chrome to record
- resize window to fit recording area horizontally by using the Ctrl + Shift + C debugger view. Make sure to also resize the browser window vertically (cannot be done on debugger, needs resizing actual window) otherwise you won't be able to scroll if the page is not taller than the viewport.
- be careful about single pixel black border lines straying in the recording area, they are mega visible against the clear chrome browser bar on the finished output!
- music style guidelines: cool, beats, techno, mysterious, upbeat. Some of the videos contain non-fully free YouTube music added via the YouTube UI. Reupload together with the video files appears to be allowed however. Ideally we should use fully CC BY-SA music, but it is quite hard to find good ones. NC is not acceptable.
- hardcode subtitles in the video. No voice. Previously we were using Aegisub to create the subtitles in .ass format and ffmpeg to hardcode them:
  ffmpeg -i raw.mkv -vf subtitles=sub.ass out.mkv
  but later we learnt about KDenlive support for subtitles and moved to that instead, as it is even more convenient to have it all in one place.
  When recording, make sure that all key mouse action happens on the top half of the viewport, otherwise it will get covered by the subtitles in downstream editing.
  - 22pt white font with black background to improve readability
- aim to have 3/4 lines of subtitle maximum per frame
- on YouTube, add the video as the first video of the "Videos" playlist: www.youtube.com/playlist?list=PLshTOzrBHLkZlpvTuBdphKLWwU7xBV6VF This list is needed because otherwise YouTube's stupid "Shorts" features produces two separate timelines by default, one for shorts and one for non-shorts. With this list, all videos can be seen easily as non-shorts.
The OurBigBook Project has sporadically offered a fellowship called the "OurBigBook.com Fellowship". Its recipients are called the "OurBigBook.com Fellows".
The goal of the fellowship is to pay brilliant students to focus exclusively on pursuing ambitious goals in STEM research and education for a pre-determined length of time, without having to worry about earning money in the short term.
The fellowship is both excellency and need based, focusing on brilliant students from developing countries whose families were not financially able to support them.
Being brilliant, such students would be tempted and able to go for less ambitious jobs that pay them in the short term. The goal of the fellowship is to free such students to instead pursue more ambitious, longer term goals.
Or in other words: to allow smart people to do whatever the fuck they really want to do.
The fellowship is paid as a single monetary transfer to the recipient.
There are no legally binding terms to the fellowship: we pick good people and trust them to do what they think is best.
The fellowship is more accurately simply a donation. There is no contract. Whatever happens, the OurBigBook Project will never be able to take legal action against a recipient for not "using well" their donation.
The following ethical guidelines are however highly encouraged:
- to acknowledge the funding where appropriate, e.g.:
- at "funding slide" (usually the last one) of a presentation for work done during, or that you feel is a direct consequence of the fellowship
- by marking yourself as a "OurBigBook.com Fellow" on LinkedIn, under the organization: www.linkedin.com/company/ourbigbook for the period of award
- keep in touch. Let us know about any large successes (or failures!) you have had as a consequence of the funding, e.g. publications, starting a cool new job, or deciding to quit academia.
- give back culture: if one day, in a potentially far and undefined future, recipients achieve a stable financial situation with some money to spare, they are encouraged to give back to the OurBigBook.com Fellowship fund an amount at least equal to their funding. This enables us to keep sustainably investing in new brilliant talent who needs the money. We are more than happy to consider the fellow's suggestion for a recipient of their choice. Remember that an investment in the American stock market doubles every 10 years. So if you do go into a money making area, can you, as a "person investment", match or even beat the market? :-) Or conversely, the sooner you give back, the less you are morally required to give back. Fellows who go on to work on charitable causes, which include the incredibly underpaid academic jobs, absolutely don't have to give back. If you are able to give back by doing a corresponding amount of good to the world, all the better. It is you that has to look into your heart and decide: how much free or underpaid work have I done? And then, if there is some money left after this consideration, you give that amount back.
- pivoting is OK. If you decide half way that your initial project plan is crap, change! We can only notice that something won't work once we try to do it for real. At least now you know!If you do pivot to something that makes money immediately however, the correct thing to do is to return any unused funds of the fellowship. The sooner you pay, the lesser your moral dividend obligation, right?
- be bold. Don't ever think "I'll take this safer option because it will allow me to pay back earlier".The entire goal of the scholarship is to allow smart people to take greater risks. If you took the risk, e.g. made a startup instead of going to a safer job, failed, and that made you make less money than you would have otherwise, no problem, deduce that cost from the value you can return in the future, and move on.But if you take a bet and it pays big time, do remember us ;-)
We also encourage fellows to take good care of their health, and to strive for a good work/life balance. Exercise. Eat well. Rest. Don't work when you're tired. Take time off when you are stressed. Keep in touch with good friends and family. Talk to someone if you feel down. Taking good care of yourself pays back with great dividends in the long run. Invest in it.
This section lists current and past OurBigBook.com Fellows. It is a requirement of the fellowship that fellows should be publicly listed here.
Publicly known updates related to their fellowship projects may also be added here where appropriate, notably successes! But we also embrace failure. All must know that failure is a possibility, and does happen. If you can't fail, you're not dreaming big enough. Failing is not bad; it is inevitable.
2022-12: Letícia Maria Paz De Lima is awarded 10,000 Brazilian Real (~1,929 USD) to help her:
Focus on her quantum computing studies and research until 2023-06-30 (end of her third year), with the future intention of pursuing a PhD abroad in that area.
At the time of award, Letícia was a 3rd year student at the Molecular Sciences Course of the University of São Paulo and held a FAPESP Scientific Initiation Scholarship. She had become interested in Quantum Computing in the past year, and is passionate about working on that promising area of technology.
Her main mentors in the area have been professor Paulo Nussenzveig and Barbara Amaral of the Institute of Physics of the University of São Paulo.