Optimizing for "benchmarks"
But after reading negative reviews of other editors, I'm pretty sure someone will complain that they can't open a big file, or paste in 100,000 lines of code (LOC).
Opening a really big file or pasting a lot of data is sometimes used as a "benchmark" for editors. So I have "cheated" a little and optimized for it 😋
What is a benchmark
A benchmark is a way to compare different solutions or products. For example, a car benchmark might be how long it takes to go a quarter mile from a full stop. A benchmark for graphics cards might be the average rendering speed in frames per second (FPS) during a demo. A CPU benchmark might be how many MD5 hashes it can compute per second. A hard disk drive (HDD) benchmark might be read/write speed.
Benchmarks make it easier to compare different products, and help users make informed purchase decisions without being domain experts. It becomes a problem, however, when manufacturers optimize the product for these benchmarks ...
Benchmarks for text editors
It's hard to come up with good benchmarks for text editors. These are the ones I think are most common:
- Open a large file
- Pasting large amounts of text
- Input latency
- Binary size (how much disk space the program takes up)
- Memory usage (how much RAM the program takes up)
- Start-up time (how fast the program loads; time to first text entry)
These however say little about the quality of a text editor. They just have to be good enough to not be annoying.
I guess they are the lowest-hanging fruit benchmark-wise. What else can you test!?
But some users really care about these "benchmarks", which I find rather silly; that's why I "cheat" and optimize for them...
And some users do have warranted use cases, for example editing compiled code - which can mean really huge files.
Open a large file
I have optimized the editor to only load the first part of a large file, then load new chunks when the user navigates the file.
This makes it fast to open files of almost any size.
Pasting large amounts of text
I have optimized the editor so that when you paste something large, it asks if you want to save it, writes all the pasted data to disk, and then reopens the file - starting from the chunk you were at when pasting, combined with the pasted data.
Input latency
I covered input latency in another blog post. It depends on the client used. All browsers I tested use a fixed 60 FPS, and the editor stays under the 16 ms mark most of the time. After some optimizations for slow devices, the editor should have around 1 ms latency on most user interactions.
With many monitors supporting higher frame rates (FPS) nowadays, I hope browsers will raise the max frame rate for the Canvas.
As it is now, depending on timing, an update can take up to 15-16 milliseconds (ms) to show, even though the editor executes the operation in less than one ms.
Binary size
The editor is built with vanilla JS, and the distribution ships the JS files as-is. The zipped/compressed desktop release package is currently around two megabytes (MB) - not bad!
But you also need Node.js, and a browser to run the client. Almost all devices already have a browser though.
Then you need to install the dependencies for the Node.js server. I do however try to keep dependencies to a minimum.
Electron is a popular runtime for web apps on the desktop, similar to NW.js. I chose not to include Electron or NW.js, because most browsers can already run in chrome-less "app" mode - hiding the URL bar and other browser chrome, making the web app look like a "native" app.
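For example, Chromium-based browsers can be launched in app mode with a command along these lines (the URL and port are placeholders, and exact flags vary by browser and version - check your browser's documentation):

```shell
# Launch the web client in chrome-less "app" mode: no URL bar,
# no tabs, just the page in its own window.
chromium --app=http://localhost:8080
```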
"Electron" apps receive a lot of hate because each of them comes with its own browser engine, which consumes both hard disk drive (HDD) space and memory (RAM).
Running many Electron apps could max out the memory on a low-end machine (an Electron app uses around 200 MB of memory).
Note though that my editor does not use Electron; it uses the built-in browser! It has a separate back-end and front-end, where the back-end is a Node.js server and the front-end is a web browser app.
Funny enough, my editor started out as a pure web app, then it was converted to an NW.js app (similar to Electron), and then I separated the front-end and back-end.
Statically linked binaries
Native apps can achieve small binary sizes because they use platform libraries. Maintaining many versions of libraries is a nightmare for operating system (OS) developers though.
But it's possible to include all dependencies in one large executable, called a statically linked binary. Such a binary can get really huge - up to several hundred megabytes (MB)!
But with consumer hardware approaching one terabyte (TB) of HDD space or more, a few extra MB should not be an issue. Even solid state drives have become much cheaper in recent years ... A large package can however be an issue for users with very slow or expensive Internet connections.
I remember having only 32 MB or so of memory back in the 1990s... But back then memory was not an issue, unless you wanted to play Doom.
There was however a period, roughly 2000-2010, when memory was still expensive, but rich media on the web boomed, the amount of software exploded, and software became overall less efficient. Having a lot of browser tabs open could easily make the computer "swap" (use the HDD as memory). I remember upgrading to 512 MB of memory but still having issues. I later upgraded to 4 GB, which was the max for 32-bit operating systems, but I still sometimes had performance issues due to not enough random access memory (RAM).
So the last time I upgraded my computer, I decided I didn't want any more "swapping", and went with 16 GB of memory and a 64-bit OS. I have not had any memory issues to this day. Now almost all devices run 64-bit, and RAM is cheap.
I had an idea to make a compiled binary with just a simple text box, and then switch to the browser client once it had loaded -
kinda like a splash screen that you can write in. But the statically linked (libraries baked in) "splash screen" binary used around 100 MB of HDD space!
So I have to choose between two diseases (plague or cholera): either a large package size, or a slow start-up time.
Personally I don't think one second is that slow, considering I only open the editor once, then keep it open ... When the editor is installed globally via the Node.js package manager (npm), you can write "jzedit nameoffile.txt" in the terminal, and it will open the file in a new tab inside the already running editor!
I can't see why you would want to close and reopen the editor for each file!? Maybe if your operating system (OS) doesn't allow multitasking, but almost all OSs do.
Working with streams in Node.js
For the big file support I use Node.js streams. Streams are very nice in theory: an abstraction over file buffers that lets you think about loading a file
as if it were water pushed through a pipe. In practice, however, they are complicated to work with. The main problem is that you need to keep track of line breaks, and stream chunks/buffers
can split anywhere, even in the middle of a line break. On Windows a line break is two characters, a carriage-return symbol
followed by a line-feed symbol, while on Unix/Linux it's only a line-feed symbol.
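A sketch of how to split incoming chunks into lines while handling both styles of line break - including a `\r\n` pair split across two chunks (my own illustration, not the editor's actual code):

```javascript
// Returns a function that accepts text chunks and emits complete lines,
// handling both Windows (\r\n) and Unix (\n) line breaks - even when a
// \r\n pair is split across two chunks.
function makeLineSplitter() {
  let carry = ""; // leftover text (possibly ending in "\r") from the previous chunk
  return function push(chunk) {
    const text = carry + chunk;
    const lines = [];
    let start = 0;
    for (let i = 0; i < text.length; i++) {
      if (text[i] === "\n") {
        // Drop a preceding "\r" so "foo\r\n" and "foo\n" both yield "foo".
        const end = i > 0 && text[i - 1] === "\r" ? i - 1 : i;
        lines.push(text.slice(start, end));
        start = i + 1;
      }
    }
    carry = text.slice(start); // may end in "\r" - resolved on the next chunk
    return lines;
  };
}
```

The key is the `carry` state: a chunk ending in `\r` is ambiguous (it could be half of a `\r\n`), so the decision is deferred until the next chunk arrives.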
... (The story behind the line break is pretty interesting and deserves its own blog post...)
Stream chunks can even split inside UTF-8 characters, giving you only half of a character's bytes.
So each chunk needs to be decoded into a UTF-8 string, taking into account the last bytes of the prior chunk. Then you have to keep track of line breaks in order to know which part of the file to send to the client.
I found around 20 edge cases and wrote a test for each. Having automated tests to confirm that my latest change didn't break a prior fix is really helpful. ... I should probably write a separate blog post/tutorial about testing, because automated testing is just that nice.
Written by Johan Zetterberg January 7th, 2018.