diff --git a/assets/giga_chad.jpg b/assets/giga_chad.jpg new file mode 100644 index 0000000..7150f96 Binary files /dev/null and b/assets/giga_chad.jpg differ diff --git a/assets/google_cosplay.jpg b/assets/google_cosplay.jpg new file mode 100644 index 0000000..1eaf943 Binary files /dev/null and b/assets/google_cosplay.jpg differ diff --git a/assets/the_pipeline.svg b/assets/the_pipeline.svg new file mode 100644 index 0000000..b8a8fb2 --- /dev/null +++ b/assets/the_pipeline.svg @@ -0,0 +1 @@ +
[the_pipeline.svg flowchart text: writers and editors write articles in Djot; pull requests are opened on Forgejo; if an article is good, it gets merged to prod; merges to prod trigger Woodpecker CI, which compiles articles to HTML, using git for incremental rendering and author annotations; articles are statically compressed with Brotli and written back to the server; articles are served with NGINX; search is powered by Meilisearch, where articles are registered and updated; large assets like images are stored in Git LFS]
\ No newline at end of file diff --git a/src/how_to_run_a_journal.dj b/src/how_to_run_a_journal.dj new file mode 100644 index 0000000..14a12ae --- /dev/null +++ b/src/how_to_run_a_journal.dj @@ -0,0 +1,96 @@ +# Web-dev, and the Power of Simplicity.

Hi! I'm Isaac Mills, the guy managing the infrastructure behind Compute! In this article, I'd like to talk about just that: the infra behind this media outlet, how it all works, and why it is the way it is.

## Plain text

Plain text is as simple as it gets, yet it can be an extremely powerful tool. It's capable of being anything, and can be transmuted into anything. Its infinite extensibility makes it a formidable tool that every developer should have in their arsenal. In our case, we use a lot of plain text. In fact, the article you're reading right now is written in plain text, _not with some web UI_. A while back, I found a markup language called [djot](https://djot.net). It was created by John MacFarlane, the same person who created CommonMark (a standardized flavor of Markdown), to be easier to parse and more featureful. Below is some example djot:

```djot
# Heading

paragraph

*bold* _italic_ _*bold italic*_ {-strikethrough-} {+underline+}

- list
- list

1. list
2. list
3. list
```

The benefit of using djot is that it compiles directly to HTML, so the journalists who have joined Compute don't need to learn HTML to write articles. They also don't need to learn a clunky, slow website editor like Wix or Squarespace.

This is another superpower of plain text: if we used Wix for our website, our journalists would need to learn the Wix UI, and how to write articles _for_ that UI. If we ever changed our tooling, they would need to re-learn everything for _that_ tool. Not only that, but we would need to port the entire website (_every_ article) to the new tooling.
This is not so with plain text: if things change in the pipeline, or even if you're just joining our publication, there's no need to re-learn how to write text. At worst, you just need to convert the plain text to another format of plain text (djot to HTML, for example). All our journalists need to know how to do is write their articles in djot and submit them to the team via the pipeline.

## The Pipeline

![A flowchart of the pipeline](assets/the_pipeline.svg)

Pictured above is the full pipeline that articles go through in order to reach you readers at home. It goes like this:

1. Articles are written by our journalists in a plain text format (djot in our case)
2. Once an article is done, the journalist who wrote it opens a pull request on our git repository with the new article and its associated assets
3. The article is reviewed by the team, who can make edits
4. Once the article has been edited, the pull request gets merged into the main branch of our git repository, which is where the articles you see live
5. From there, the article goes through CI and gets deployed (we'll get into that in more detail later)

Basically, this is just the workflow you would use for code, adapted for journalism. In other words, no learning curve for our journalists! And if they do need to learn it, then this is information they *should* know _anyway_. The workflow you see above has been in the making since git was created in 2005, with the sole purpose of efficiently moving code from development into production. In the places where this workflow is used, overhead in the pipeline means time, and time means money; so the workflow should be as simple as possible, and I see it no differently for our publication. If this workflow has worked for nearly two decades for a pathologically huge project like the Linux kernel (which git was tailor-made to handle), it will handle a journal just fine.
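To make the "articles get compiled to HTML" step concrete, here is a toy sketch in Rust (the language our real pipeline tool is written in) that handles only top-level headings and plain paragraphs. It's an illustration of the idea, not our actual compiler, which uses a full djot parser:

```rust
// Toy converter: handles only `# `-style headings and plain paragraphs.
// The real pipeline uses a proper Djot parser; this just shows the shape
// of the plain-text -> HTML transformation.
fn toy_djot_to_html(src: &str) -> String {
    let mut html = String::new();
    // Djot, like Markdown, separates blocks with blank lines.
    for block in src.split("\n\n").map(str::trim).filter(|b| !b.is_empty()) {
        if let Some(rest) = block.strip_prefix("# ") {
            html.push_str(&format!("<h1>{}</h1>\n", rest));
        } else {
            html.push_str(&format!("<p>{}</p>\n", block));
        }
    }
    html
}

fn main() {
    let article = "# Heading\n\nparagraph";
    print!("{}", toy_djot_to_html(article));
}
```

Because the input is plain text, swapping in a real parser later changes nothing about the workflow around it; that's the point.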
## CI

Consider the following: if I'm accepting untrusted code from the public into my open source project, and I need that code to be production ready, how can I ensure that the code I accept _is_ actually production ready? The answer is _continuous integration_, or _CI_. The idea is that every piece of code submitted to an open source project undergoes automated testing, linting, and checking to ensure that nothing will break upon merging the code into the production code base. For a project like [`egui`](https://lib.rs/crates/egui), the CI pipeline contains 19 checks.

Their pipeline checks that the library, with your new code, compiles for every platform it supports, with every feature enabled. It also makes sure that your code is well-formatted, contains no license conflicts, uses no libraries banned by the project, and contains no security advisories. The _only_ way this many checks can be run on every git commit is through CI (GitHub Actions in egui's case).

The way I've described CI so far has probably made it seem like the least simple thing you could add to your project, but CI is actually _quite_ simple, and it's an excellent example of how capable a tool you can build by just stacking simple on top of simple, on top of simple. Really, CI is just shell scripting with extra steps. Despite its inherent simplicity, CI can serve not only as a means to filter bugs out of pull requests, but also as a simple way to communicate to open source developers _what a project wants_ out of their code. Instead of having to read a big `CONTRIBUTORS.md` file to get an idea of that, developers know their code is good quality if it passes CI.

Fortunately, the level of CI I've described above isn't required for journalism. Our CI simply compiles our journalists' unreviewed articles and serves them on an un-indexed (not visible on production) web page so that they and the team can preview their work before merging it.
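For a sense of how little ceremony this takes, a Woodpecker pipeline is a short YAML file. The sketch below is hypothetical: the step name, image tag, and command are illustrative placeholders, not our real configuration:

```yaml
# Hypothetical .woodpecker.yml sketch; names and image are illustrative.
steps:
  compile-articles:
    image: rust:1.78
    commands:
      - cargo run --release   # run the custom article-compiling tool
    when:
      event: [pull_request, push]
```

The `when` block is what lets one pipeline serve both preview builds (pull requests) and production deploys (pushes to the production branch).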
Our CI is also responsible for indexing and publishing finished articles onto our production website.

## Deployment

Deployment is the most complex part of our pipeline: getting the written, production-ready articles onto the website _you_ are reading this on. As I said earlier, we use CI to compile and index finished articles. The CI tool we use is called [Woodpecker CI](https://woodpecker-ci.org/), a self-hosted, docker-based CI tool. Self-hosted means that we can run the CI tool on the same server we use to serve our website, making deployment as easy as moving the generated files into the directory that our web server (NGINX in our case) is serving. What's important to know is that when a pull request is opened on this journal's git repository, and when a pull request is merged into production, Woodpecker CI will run a custom program that I wrote in Rust to...

- Compile djot articles to HTML
- Minify and compress the compiled HTML
- Index articles with our search engine

Because our CI tool is running this code, we can know which articles need compilation, who wrote those articles, and whether any articles need to be deleted. Our CI tool puts our code in the working directory of our git repo, and via environment variables it tells us which files have changed, how the pipeline was triggered (pull request, code push, etc.), which git branch production resides in, and which branch our code is currently on. Our code can then use this information to...

- Stat the changed files, which is how we know which files need to be compiled and which files have been deleted
- Run a blame on new articles, which is how we figure out who wrote them
- See if we have changed the production branch, and index new articles if so

Everything I've described the program doing so far can be done in just \~400 lines of Rust.
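As a sketch of that environment-variable handshake: the snippet below partitions a changed-file list into articles that need recompiling and articles that were deleted (a deleted file no longer exists in the checked-out working directory, which is what the stat reveals). The variable name `CI_PIPELINE_FILES` and the comma-separated format are assumptions for illustration, not necessarily what Woodpecker exports, and this is not our real \~400-line program:

```rust
use std::env;
use std::path::Path;

// Split a changed-file list into articles to (re)compile vs. ones deleted
// by the merge. Files removed upstream no longer exist on disk in the
// checked-out working directory, so a stat tells the two cases apart.
fn partition_articles(changed: &str) -> (Vec<String>, Vec<String>) {
    let (mut compile, mut deleted) = (Vec::new(), Vec::new());
    for file in changed.split(',').map(str::trim).filter(|f| f.ends_with(".dj")) {
        if Path::new(file).exists() {
            compile.push(file.to_string());
        } else {
            deleted.push(file.to_string());
        }
    }
    (compile, deleted)
}

fn main() {
    // Hypothetical variable name and format, for illustration only;
    // the real CI tool documents the exact variables it exports.
    let changed = env::var("CI_PIPELINE_FILES").unwrap_or_default();
    let (compile, deleted) = partition_articles(&changed);
    println!("compile: {:?}", compile);
    println!("remove from site: {:?}", deleted);
}
```

Non-article files (README changes, CI config, and so on) fall out of the filter for free, which keeps the compile step from doing needless work.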
This is the effect of building simple on top of simple, _and making an effort to retain simplicity_ by making the most of the simple building blocks. (P.S.: Our Rust code now also generates a title card image for each article, so articles look better on platforms like Twitter and Discord. Using the same strategy, I only needed to add \~230 lines to our Rust code.)

[![Google cosplay is not business-critical](assets/google_cosplay.jpg)](https://twitter.com/garybernhardt/status/1344341213575483399)

My Rust code makes heavy use of git, a very simple yet amazing tool for adding functionality to plain text.

- It allows us to separate WIP articles from finished ones
- It keeps an accurate, automated record of who wrote and edited each article
- It allows a copy of the entire publication to be stored in many different places as backups
- It allows us to easily sync new articles and changes to any git-compatible software forge of our choice (we use [forgejo](https://forgejo.org/))

By taking advantage of the vast functionality that a simple tool like git gives us, our code can stay lean while also being very mean. Speaking of staying lean...

## Webpages (and staying based)

![An image of the giga-chad](assets/giga_chad.jpg)

There is no better example of the cons of complexity than the modern web. Most webpages are not only bloated with ads, cookie banners, autoplaying BS, popups, and the like; they are also inundated with copious amounts of JavaScript. We only use JavaScript in two places:

1. On our homepage, to power the search bar and display articles
2. In every page, where our web design tool, Webflow, bundles a small amount of JavaScript (more on that later)

Other than that, the actual article pages, such as this one, depend on nothing but the JavaScript that Webflow bundles in. And our homepage is built and optimized so it can be served statically with its _one_ dependency.
Basically, I wanted to make our website as [suckless](https://suckless.org/philosophy/) as possible: as lightweight, and as free from bloat, as it could be. I say _I_ wanted to because our founder originally wanted to use Wix. Yuck.

Instead of _that_, I used [Webflow](https://webflow.com/) to design our webpages. For a one-time fee of $24, you get access to the Webflow editor for 1 month, and can then export your web pages to HTML/CSS/JS. Webflow is very different from your average Wix or Squarespace: those editors are designed for non-programmers who don't know, and don't want to know, HTML or CSS. Webflow is an editor for _developers_ who know what they're doing. It gives you the full power of HTML and CSS in a responsive, visual editor, making it an incredibly flexible tool capable of generating very based and performant webpages, unlike Wix and Squarespace, which generate bloated, obfuscated garbage.

In just HTML, CSS, and JS, the compressed homepage of this journal is around 152.42 KB, whereas our old Wix homepage compressed was around 861.9 KB. That's nearly a 6x size difference, and it's the effect of building your website in a UI that attempts to match the flexibility of HTML and CSS... without actually exposing you to the HTML and CSS. This is where Webflow thrives, as it is literally just a simple graphical editor _for_ HTML and CSS. It's another excellent example of what building simple around simple can do!

## In conclusion

Computers have an inconceivable amount of potential, but they're only as smart as their programmer. When you're building something with a computer, it's often much better to do more with less than less with more. Don't use 17 different JavaScript frameworks with your hypervisor GPU WEB2.0 interface-driven scripting framework to drive your map/reduce-aware proxy-oriented software API. Just start simple, build simple, and if you need complexity, build it with more simple.
The point I'm trying to make here is that bloat is your enemy, and it's often better for you, your team, and your users to just KISS (Keep it simple, stupid!). diff --git a/templates/article.html b/templates/article.html index e5762c1..2ed46e0 100644 --- a/templates/article.html +++ b/templates/article.html @@ -16,7 +16,7 @@ - + @@ -77,8 +77,8 @@
{{author.full_name}}
{% endfor %} -

- {{published}} +

+ {{read_time}} minute read · {{published}}

@@ -94,4 +94,4 @@ - \ No newline at end of file +