Announcing Alchemize v3 — faster, smarter, and sleeker than ever

Alchemize is a lightweight tool to minify and pretty-print your code snippets from around the web. It’s perfect for developers to format messy API responses, debug broken or unreadable code, learn new JS tricks from the Terser engine, or share polished snippets with teammates.

The application has a minimal interface and shows some HTML code with buttons to minify & pretty-print

Alchemize was initially written in jQuery, and after a good ten years since the last release, I thought it was time to rewrite my little code playground from scratch.

The new Alchemize is based on Preact and Vite for a more modern developer experience, uses the same editor component as VSCode, and natively supports light and dark mode. It’s been upgraded to mainly use Terser and Prettier under the hood, which brings updated JS & CSS support and should make these a lot easier to maintain going forward.


  • ♻️ Modern tech stack: A complete under-the-hood revamp for better performance and future-ready support.
  • 🛠️ Industry-standard tooling: Built on trusted technologies like Prettier and Terser for formatting and minification.
  • Latest JavaScript and CSS support: Enjoy the most up-to-date syntax features and improved styling support.
  • 📚 Better language detection: Alchemize now does a smarter job at figuring out what kind of code you’re working with.


What people said about v2

Before Google killed off Chrome apps, Alchemize had a loyal following on the Chrome Web Store, where it picked up a bunch of great reviews that really made my day:

“Great for checking, untangling, prettifying, and minifying JSON and other formats.”

Richard H

“Very helpful, small and effective, without the bloat of a full editor — with great error-catching capability.”

Stefan C

“I’m using this more and more now. Perfect way to make my code look nice before sharing it.”

Andrew L

🚀 Try Alchemize v3 today

The new version is live — no installation required unless you want to install it as a desktop app. Just open the app, paste your code, and experience the magic:

👉 Launch Alchemize

And if you spot anything missing or have ideas for improvement, do get in touch or fork the project on GitHub.

What I’m doing for bookmarks now that Pocket is dead

Pocket was a pretty great little app. It let you save articles, read them later, tag them, and so on. I used it because it was integrated with my Kobo ebook reader, so I could save stories and read them on that. But then Mozilla unceremoniously killed it.

I wouldn’t have minded so much except that I was using the API to power the bookmarks section of my website, where I share articles I find particularly interesting (there’s an RSS feed for the particularly adventurous). But that’s been broken for the last little while.

I migrated over to Instapaper since that seemed to be the reliable alternative, but their API has a manual human approval process, and I’m neither patient enough to wait nor confident it will succeed.

So I had a new idea: what if I just post to Mastodon with a specific hashtag, then use the Mastodon API to collate the posts together? That way I get the benefit of sharing links on socials, and it’s much easier than relying on another third-party app.


Long story short, I gave Kilo Code some specs and it built an implementation that searches the Mastodon API for any post from me with the hashtag #link (the query is just “#link from:@ash”), grabs the metadata and hashtags from the API, and merges those into the existing bookmarks.json file that runs the whole thing.

I’m not going to post code because you can probably write something more to your liking. But I thought the concept was cool, and now I’m thinking about other possibilities, like having a music recommendations playlist that pulls from my posts in the #NowListening hashtag.
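If you want the rough shape anyway, it looks something like this (a hypothetical sketch, not my actual script: the instance URL, username, and file path are all assumptions, though the endpoint is the standard Mastodon hashtag timeline API):

// Collate my #link posts from Mastodon into bookmarks.json.
// Hypothetical setup: adjust the instance, username and paths to suit.
import { readFile, writeFile } from 'node:fs/promises';

const INSTANCE = 'https://mastodon.social';
const USERNAME = 'ash';

// the public hashtag timeline needs no auth for public posts
const res = await fetch(`${INSTANCE}/api/v1/timelines/tag/link?limit=40`);
const statuses = await res.json();

const bookmarks = JSON.parse(await readFile('bookmarks.json', 'utf8'));

for (const status of statuses) {
  // keep only my own posts, and only ones with a link preview card
  if (status.account.acct !== USERNAME || !status.card) continue;
  if (bookmarks.some(b => b.url === status.card.url)) continue;

  bookmarks.push({
    url: status.card.url,
    title: status.card.title,
    tags: status.tags.map(t => t.name).filter(t => t !== 'link'),
    date: status.created_at,
  });
}

await writeFile('bookmarks.json', JSON.stringify(bookmarks, null, 2));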


I saw this post on BlueSky the other day and it tickled me. I was reminded of it because, in a way, that’s exactly what I’m doing now: using loosely marked-up socials as the backend.

BlueSky and YouTube are down at the same time. This is because, to save money on storage costs, BlueSky just saves all our posts as comments on one kid’s middle school science fair presentation in 2011 (he lost)

Daniel Feldman @ bsky

How to code-split Svelte 5 components

Svelte doesn’t have a built-in way to code-split components. But the new syntax brings it closer to React, so we can borrow the same pattern.


I like to have two components in this setup:

  • MyComponent.svelte – the main component that we want to split into its own bundle
  • index.svelte – the component that shows a loading spinner & renders MyComponent once it’s loaded

There may be other ways to do this but I prefer the Typescript syntax. You can adjust to suit your purposes:

import { onMount, type Component as ComponentType } from 'svelte';
let props = $props();
let Component = $state<ComponentType | null>(null);
onMount(() => {
  // import() triggers the code split, and loads async.
  // webpackChunkName tells Webpack what to name the file (where applicable)
  import(/* webpackChunkName: "my-component" */ './MyComponent.svelte').then(module => {
    Component = module.default as ComponentType;
  });
});

From there, the Svelte template can check for Component and render it once it exists:

{#if Component}
  <Component {...props} />
{:else}
  <p>Loading...</p>
{/if}
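From the consuming side nothing changes: import the wrapper instead of the component itself (a sketch, assuming both files live together in a MyComponent folder):

<script lang="ts">
  // only the lightweight wrapper lands in the main bundle;
  // MyComponent.svelte is fetched on demand when this mounts
  import MyComponent from './MyComponent/index.svelte';
</script>

<MyComponent someProp="value" />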

Using MapLibre GL JS from a CDN in Typescript

When my recent project build came out at 2 megabytes for something that should have been dead simple, I realised bundling MapLibre, the WebGL-based map library, was adding a lot of bulk.

Since we use maps in various projects and don’t need to duplicate the library for each project/release, I wanted to split it out onto our CDN and import it as required.

This guide is specific to MapLibre GL JS, but could apply to any large library, really.


Import the library

MapLibre isn’t distributed as an ES module, and I’m using a webpack config that won’t allow import(), so I wrote a small function to load a script as a global:

  const promises = {};
  function importModule(url) {
    // only load each URL once; later calls share the same promise
    if (promises[url]) {
      return promises[url];
    }
    promises[url] = new Promise((resolve, reject) => {
      // inject a script tag, and settle the promise when it loads or errors
      const s = document.createElement('script');
      s.src = url;
      s.type = 'module';
      s.addEventListener('load', resolve);
      s.addEventListener('error', reject);
      document.head.appendChild(s);
    });
    return promises[url];
  }

From here we can load MapLibre from wherever we like. The docs suggest unpkg:

importModule('https://unpkg.com/maplibre-gl@^5.3.0/dist/maplibre-gl.js')
  .then(() => {
    console.log(window.maplibregl);
  });

Adding Typescript types to the CDN library

This works fine, but now Typescript has no idea what’s going on. We need to import the types.

Turns out you can plop a .d.ts file in the same directory as your code and Typescript will pick it up and provide types for the global variable.

MapLibre distributes .d.ts files in their releases, so I downloaded the release corresponding to my CDN version, unzipped it, and placed the .d.ts in the directory where I’m using MapLibre. As if by magic, Typescript types and JSDoc hints are now available.
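If your setup doesn’t pick up the global automatically, one option is a tiny ambient declaration that points the maplibregl global at those bundled types (a sketch; the filename and relative path are assumptions):

// maplibre-global.d.ts (hypothetical filename)
// Types the `maplibregl` global created by the CDN script,
// reusing the exports from the bundled maplibre-gl.d.ts.
declare const maplibregl: typeof import('./maplibre-gl');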


Importing additional types

When you need specific types from the .d.ts file, you can import them as types. This seems obvious, but it caught me out.

import type { MapOptions } from './maplibre-gl';

const mapOptions = {
	container: mapRootEl,
	interactive: false
} as MapOptions;
const map = new maplibregl.Map(mapOptions);

There are two gotchas. First, don’t include .d.ts in the import path: Typescript will find the file without it, and will complain if it’s there. Second, make sure to import as type, otherwise your build may fail.

And that’s about it. Everything TIL.

Using the cache API to store huge amounts of data in the browser

Recently I came across the Cache API. It’s used in Service Workers to prefetch/cache files for your offline web app, but it’s also available outside of workers.

I was looking at using the Cache API to cache generated images, to speed up a WebGL piece without crashing Safari. I never got around to it, but now that I’m looking at it, it’s kinda easy:


To open a cache and save an arbitrary string:

caches.open('SomeCacheName').then(async cache => {
    // put something in the cache
    await cache.put('SomeKeyName', new Response('Hello world'));

    // Get something back out of the cache
    const response = await cache.match('SomeKeyName');
    const text = await response.text();
    console.log(text); // Hello world
});

You can then reopen that cache at any time in the future to fetch your value:

caches.open('SomeCacheName').then(async cache => {
    const response = await cache.match('SomeKeyName');

    // treat the response object as you would a fetch() response
    const text = await response.text();
    console.log(text); // Hello world
});

While this example uses strings (retrieved with response.text()), you can store any number of formats including Blobs and ArrayBuffers. See the Response constructor for ideas.
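For example, to cache an image generated on a canvas (a sketch, assuming a canvas variable already exists):

caches.open('SomeCacheName').then(async cache => {
    // grab the canvas contents as a PNG blob
    const blob = await new Promise(resolve => canvas.toBlob(resolve, 'image/png'));

    // store it with a content type, so it behaves like a fetched image
    await cache.put('GeneratedImage', new Response(blob, {
        headers: { 'Content-Type': 'image/png' }
    }));

    // later, read it back out as a blob
    const response = await cache.match('GeneratedImage');
    const image = await response.blob();
    console.log(image.size, image.type);
});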


The benefit of using the standard old browser cache is that you can store a LOT of data. My Mac reports the following:

navigator.storage.estimate().then((estimate) => {
    console.log('percent', (
        (estimate.usage / estimate.quota) *
        100
    ).toFixed(2));
    // percent 0.00

    console.log('quota', (estimate.quota / 1024 / 1024).toFixed(2) + "MB");
    // quota 10240.00MB
});

That’s 10 gigabytes of storage available. To be fair, not everyone will have that much space, but you get the idea.

It also needs to be cleaned up manually; otherwise it will sit in the cache taking up space indefinitely (unless the browser clears it). The MDN page says:

The browser does its best to manage disk space, but it may delete the Cache storage for an origin. The browser will generally delete all of the data for an origin or none of the data for an origin.
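When you do want to clean up after yourself, it’s a couple of one-liners: cache.delete() removes a single entry, and caches.delete() removes a whole named cache:

caches.open('SomeCacheName').then(async cache => {
    // remove a single entry
    await cache.delete('SomeKeyName');
});

// or blow away the entire cache
caches.delete('SomeCacheName').then(deleted => {
    console.log(deleted); // true if the cache existed
});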


I haven’t used this in production yet, but it seems to work fine. And browser support looks good. So let me know if this is useful.

For more reading, check out the MDN Cache API.

My script to auto-delete Google Maps reviews

Google Maps, like much of the web, has devolved into an AI-generated slurry.

Nowadays every review I leave gets a reply “from” the business which is clearly generated by a machine. All the photos are inevitably going in to train Gemini. And the level-up gamification was fun at first but led to nothing more than endless grinding to reach the next level.

Anyway, I’m out. Have been for a little while. But I’ve left a trail of photos and digital detritus I’d rather clean up.

Google doesn’t have a way to bulk-delete your stuff (for obvious if annoying reasons) so I thought I’d write a little script to do it for me.

We're in Google Maps and there's an automation running to delete photos from the Local Guide section.

The scripts

There are two scripts: one for photos and one for reviews. They’re pretty naive and need to be restarted from time to time, but they should be safe enough.

Huge disclaimer: these will probably get out of date at some point and may not work. You should read and understand the code before you do anything with it.

For photos, make sure you’re on the “Photos” tab of the My Contribution section (see above), then run the following in the console:

(async () => {
  const sleep = (time) => new Promise(resolve => setTimeout(resolve,time));
  const go = async () => {
    // click the kebab
    document.querySelector('button[jsaction*="pane.photo.actionMenu"]:not([aria-hidden="true"])').click();
    await sleep(200);
    
    // click the delete menu item
    document.querySelector('#action-menu div[role="menuitemradio"]').click();
    await sleep(200);
    
    // click the "yes I'm sure" button
    document.querySelector('div[aria-label="Delete this photo?"] button+button,div[aria-label="Delete this video?"] button+button').click();
    await sleep(1500);
    
    // check if there's any left, and do it all again
    if(document.querySelector('button[jsaction*="pane.photo.actionMenu"]:not([aria-hidden="true"])')) go();
  }
  go();
})()

And for reviews on the reviews tab:

// delete reviews from Google Maps
(async () => {
  const sleep = (time) => new Promise(resolve => setTimeout(resolve,time));
  const go = async () => {
    // click the kebab
    document.querySelector('button[jsaction*="review.actionMenu"]:not([aria-hidden="true"])').click();
    await sleep(300);
    
    // find the delete review menu item
    const deleteMenuItem = document.querySelector('#action-menu div[role="menuitemradio"]+div');
    if(deleteMenuItem.textContent.trim() !== 'Delete review') return console.error('wrong menu item', deleteMenuItem.textContent);
    deleteMenuItem.click();
    await sleep(300);
    
    // click the "yes I'm sure" button
    document.querySelector('div[aria-label="Delete this review?"] button+button').click();
    await sleep(2000);
    
    // check if there's any left, and do it all again
    if(document.querySelector('button[jsaction*="review.actionMenu"]:not([aria-hidden="true"])')) go();
  }
  go();
})();

These are also on GitHub because my blog code formatting isn’t great. I should fix that up sometime.


What I learned hacking Google Maps (lol)

This code is pretty naive, and breaks a lot. I had to go back in a few times to restart the script, or adjust the timings when something broke. But it got there in the end.

I do appreciate the simplicity of the sleep() function (a promisified setTimeout). It isn’t in the language because it’s generally better to attach some kind of event or observer, or do some polling, to make sure the app is in the correct state. But in this case I thought it was a fairly elegant way to hack together a script in 5 minutes.

I could make this faster and more stable by implementing some kind of waitUntil(selector) function to await the presence of the menu/dialog in the DOM. But I would also need a waitUntilNot(selector) to wait for the deletion to finish. In any case that’s more complex, and we don’t need to overengineer this.
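For the record, a waitUntil() along those lines might look something like this (a hypothetical sketch using polling; a MutationObserver would do the same job with more ceremony):

// poll until a condition passes, with a timeout so it can't hang forever
const waitFor = (check, interval = 100, timeout = 10000) =>
  new Promise((resolve, reject) => {
    const started = Date.now();
    const tick = () => {
      if (check()) return resolve();
      if (Date.now() - started > timeout) return reject(new Error('timed out'));
      setTimeout(tick, interval);
    };
    tick();
  });

const waitUntil = (selector) => waitFor(() => document.querySelector(selector));
const waitUntilNot = (selector) => waitFor(() => !document.querySelector(selector));

// e.g. await waitUntil('#action-menu div[role="menuitemradio"]');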

Anyway, the second script is done now and all my photos and reviews are gone. I’m still somehow a level 6 local guide (down from a level 7) so good for me.

Screenshots from Google Maps: you haven't written any reviews yet, add your photos to Google Maps + Ash is a Local Guide level 6

Dev log: Debugging Safari, an ogre with layers

We’re releasing a WebGL feature soon, and let me tell you Safari has lived up to its reputation as the new Internet Explorer.

We’ve had two big issues:

  1. Safari completely crashes the tab with “a problem repeatedly occurred”
  2. Safari WebGL rendering flickers when scrolling the page

Safari completely crashes the tab with “a problem repeatedly occurred”

The most concerning issue was the Safari crash, which only happened on iOS devices, not in the simulator. I don’t have any iOS devices to test with, so I decided to get myself an iPad mini. It’s a write-off!

Anyway, I’m now the proud owner of a cute lil purple iPad, and the page still crashes on it. That’s a good thing: now I can work out how to fix it.

After debugging for way too long, I worked out this was a memory usage issue. Even though Safari runs on some of the most powerful mobile hardware around, it has a hard limit on how much RAM a web page can consume, even when that page is the only thing open. So you can’t make full use of the device in Safari.

My problem had several causes:

  1. I was loading multiple WebGL interactives on page load. Deferring these until the user scrolls them into view helped fix the issue.
  2. I was caching WebGL textures to make the interactive feel snappier. In the end I had to remove all the caching optimisations to get under the Safari memory limit, and now each texture is rendered in realtime when it’s needed.
  3. On top of this, I was hitting an issue with too many layers in Safari causing excessive memory usage.

Compositing layers are created when animation happens in your page. Much like old cel animated films, the browser keeps content that isn’t likely to change in separate layers so it can sandwich animated layers in between, without having to draw everything all over again. For instance a parallax effect will have a background layer and a foreground layer, moving at different speeds.

A pink panther cel animation, showing the panther being lifted off the background.
An example of animation layers drawn on old school cel sheets. The pink panther can be added and removed from the scene without redrawing the layers underneath. Via The Art Professor.

Layers in Safari are created by a number of things, including:

  1. 3d transforms – e.g. -webkit-transform: translateZ(0);
  2. the will-change property – intended to be used as a last resort, not as a way to preemptively anticipate performance problems
  3. Canvas layers – canvas is designed to be drawn and redrawn at arbitrary times, so the browser keeps it on its own layer.
  4. position:sticky/position:fixed – similar to parallax effects, these are just a part of doing business and we can’t optimise them any further

In our case, we had a number of 3d transforms and unnecessary will-change properties creating extra layers. These were contributing to Safari crashing. Cutting down on these layers stopped our page from crashing.


Safari WebGL rendering flickers when scrolling the page

This one was killing me because I couldn’t reproduce it on the iPad, and I don’t have an iPhone to test on.

This mostly seemed to happen in in-app browsers (like Slack), but this morning a colleague was able to reliably reproduce it in Safari proper, and I was able to reproduce it in the simulator.

It only seemed to happen while scrolling text boxes over the WebGL canvas, so my assumption was that something was clearing the canvas without drawing the scene back in. The app creates a separate offscreen WebGL instance for performing calculations, and I assumed it might be some sort of weird race condition.

I tried a number of fixes that didn’t help:

  1. Render every frame, regardless of whether a render is needed (i.e. don’t pause rendering when the scene hasn’t changed; based on this similar bug).
  2. I tried enabling preserveDrawingBuffer in the renderer, because it seemed related. No dice.
  3. Disable anti-aliasing (per this bug, where Mapbox GL and Three.js fight it out in the same WebGL context)
  4. Downgrade to WebGL 1 (instead of 2). I can’t find the original post suggesting this, but it didn’t do anything.

The actual bug in my case was completely unrelated to Three.js.

When the screen resizes, I update the canvas size, the camera aspect, and projection matrix so that the canvas scales to fit its new dimensions:

  resizeToRoot() {
    // match the canvas, camera aspect and projection to the container size
    const rect = this.root.getBoundingClientRect();
    this.renderer.setSize(rect.width, rect.height);
    this.camera.aspect = rect.width / rect.height;
    this.camera.updateProjectionMatrix();
  }

The problem is that this code doesn’t rerender the scene. So after it runs, the frame is left blank until the next requestAnimationFrame fires.

This wasn’t a huge problem, except that when the Safari chrome slides off the page, the browser triggers a whole bunch of resizes in rapid succession. These were resizing the scene and leaving black frames until the next render, multiple times per second. And it didn’t happen on the iPad because the chrome never disappears offscreen.

Adding a this.renderer.render(this.scene, this.camera) call to the resize function was a somewhat inefficient but effective fix.
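Here’s roughly what the fixed version looks like (a sketch, assuming a typical Three.js setup where this.scene and this.camera exist):

  resizeToRoot() {
    const rect = this.root.getBoundingClientRect();
    this.renderer.setSize(rect.width, rect.height);
    this.camera.aspect = rect.width / rect.height;
    this.camera.updateProjectionMatrix();

    // rerender immediately so the frame is never left blank between
    // this resize and the next requestAnimationFrame tick
    this.renderer.render(this.scene, this.camera);
  }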

How I rolled my own vector map tiles

OpenStreetMap is like the Wikipedia of maps. Back in the earlier days I used to love running around gathering data and mapping every neighbourhood I could.

I reckon I contributed a pretty big portion of street names on the north side of Brissie, by riding around on my bike with my Nokia 6120c (great phone!) and a Bluetooth GPS dongle, recording all the points of interest like a pro, to upload to the map when I got home.

It was a great hobby at the time, when vast swathes of Australia were completely blank. Now that OpenStreetMap is pretty feature-complete, it’s used everywhere.


A short history of maps as a web developer

Back in those days the state of the art for web mapping was the tile-based “Slippy Map”.

Everyone used it, even Google Maps. You’d essentially have a Javascript frontend to let visitors zoom and scroll the map like you do today. But on the server a process would convert all the OpenStreetMap geodata into standardised image tiles (raster tiles).

Tiles were commonly created at 256×256 pixels, and were rendered at zoom levels from 0 (the whole world in one tile) down to zoom level 19 where the world would take up 274.9 billion tiles.

A map of Australia and surrounding nations, split into a 256 pixel grid

This was generally an on-demand process, as pre-rendering so many tiles would be infeasible. Ridiculous. Absurd. I can tell you this because I tried a couple of times. Not the whole world, but a few times I tried to scrape, render, and cache the entirety of Brisbane for assorted projects.


Eventually Mapbox came along with an easy-to-use interface on top of the open source data, and reasonable enough pricing to make it worth switching over.

I gave a talk a decade ago about the cool stuff people were doing with maps, and that included plenty of Mapbox evangelism.

Later Mapbox standardised the Mapbox vector tile format, which had a lot of benefits over the older raster tiles. While a raster tile could be styled to look however you want on the server, a vector tile could be styled on the client-side. That meant the same tile could power a hundred different map styles, switched dynamically in the browser. In addition, vector data makes things like animating between zoom levels look great. Generally, a huge step forward.

The new WebGL-based map library, Mapbox GL JS, was released to take advantage of these benefits, and it unlocked a lot of really high-quality maps for the masses.

By this point high quality maps were par for the course and radical innovation in the space kind of flattened out.

My opinion of Mapbox turned when they went the way of every venture-backed startup: they got involved in union busting, closed-sourced their tools, and started turning the money dial up.

That’s when I started playing with maps again.


Cycling maps

Since at least 2010 I’ve maintained briscycle.com in some form or another, and one of the main features has always been maps showing safe routes and how the infrastructure connects up.

I’ve gone through phases of running my own tile server, using statically rendered tiles, and third party map services including Mapbox (who can’t do very good cycling maps fyi). But recently I figured I’d go back to rendering my own.

I don’t remember where I spotted tilemaker, but it has such a sweet-looking website that it inspired me to have a go at building my own vector tiles. It wasn’t as easy as the website led me to believe, but after lots of trial and error, and some coding in Lua to get the right properties out, I managed to get a decent-looking cycling map out of it.

A map of Brisbane. It's fairly desaturated, except for the green cycleways and bike lanes everywhere.

I largely followed the instructions from Wouter van Kleunen’s how-to blog post, then:

  1. extended it by customising the Lua processor to pull out more cycling attributes (and skip the attributes I wasn’t interested in).
  2. styled the map using a standard JSON map style, but I also processed that on the client-side to add repetitive things like road casings (there’s a rough sketch after this list). You can check out the code here. (edit 2024: apparently maputnik lets you create style json in a graphical way)
  3. set up a small Docker machine to serve the mbtiles (dockerfile source)
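The casing trick in step 2 looks roughly like this (a hypothetical sketch: the metadata flag is my own invention for marking which layers get casings, and real styles use zoom expressions for line-width, so the +2 is a placeholder):

// Duplicate each flagged line layer into a wider "casing" layer that
// renders underneath it, so roads get outlines without hand-writing layers.
function addCasings(style) {
  const layers = [];
  for (const layer of style.layers) {
    if (layer.type === 'line' && layer.metadata?.casing) {
      layers.push({
        ...layer,
        id: `${layer.id}-casing`,
        paint: {
          ...layer.paint,
          'line-color': '#bbbbbb',
          'line-width': (layer.paint['line-width'] || 1) + 2,
        },
      });
    }
    layers.push(layer); // the original layer draws on top of its casing
  }
  return { ...style, layers };
}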

The result is pretty cool.

It’s very fast because it’s hosted in Brisbane for a Brisbane audience, so the map tiles don’t need to transit the globe before being displayed.

The tiles themselves are optimised pretty well and allow me to tweak the styles in almost real time. There are still a few weird bits, but I reckon it’s a good base layer to add stuff to, like geojson routes (check out the brisbane valley rail trail).

So that’s it from me. You can check out the map at briscycle.com/map or check out some of the cycling trips in Brisbane for more.

The easiest way to validate email in React

Email validation is one of those things conventional frontend wisdom says to steer clear of, because:

  1. there’s endless complexity in how email works, and
  2. even if an email looks valid, it might not exist on the remote server.

This is backend territory.

However.

We can hijack the browser’s built-in email validation in <input type="email"> fields to give the user a better experience.


CSS validation

At the simplest, <input type="email"> fields expose the :valid and :invalid pseudo-classes.

We can change the style of our inputs when the email is invalid:

input:invalid {
  border-color: red;
}

Nice. Zero lines of Javascript. This is by far the best way to do form validation in the year ${year}.


Plain Javascript email validation

The Constraint Validation API exposes all of this same built-in browser validation to Javascript.

To check if your email is valid in plain JS:

const emailField = document.querySelector('.myEmailInput');
const isValid = emailField.validity ? !emailField.validity.typeMismatch : true;

This works in all the latest browsers and falls back to true in those that don’t support the API.


React email validation hook

Lol just kidding, you don’t need a hook. Check this out:

import { useState } from 'react';

function EmailField() {
  const [emailIsValid, setEmailIsValid] = useState(false);

  return <input
    type="email"
    onChange={e => setEmailIsValid(e.target.validity ? !e.target.validity.typeMismatch : true)}
    />;
}

We can use the same native JS dom API to check the validity of the field as the user types.

This is by far the fastest way to validate email in react, requires almost no code, and kicks all the responsibility back to the browser to deal with the hard problems.


More form validation

Email validation is hard, and all browsers have slightly different validation functions.

But for all the complexity of email validation, it’s best to trust the browser vendor implementation because it’s the one that’s been tested on the 4+ billion internet users out there.

For further reading, MDN has great docs for validating forms in native CSS and Javascript code.

You should also inspect the e.target.validity object for clues to what else you can validate using native browser functions.
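For instance, these are the standard ValidityState flags you can check:

const emailField = document.querySelector('.myEmailInput');

console.log(emailField.validity.valueMissing);    // required, but empty
console.log(emailField.validity.typeMismatch);    // not a valid email address
console.log(emailField.validity.patternMismatch); // fails the pattern attribute
console.log(emailField.validity.tooLong);         // longer than maxlength
console.log(emailField.checkValidity());          // overall result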

It’s a brave new world, friends.

Why are React PropTypes inconsistently named?

I'm reasonably new to PropTypes in my React code and I'm always messing up the naming.

Sure "number" and "string" are easy enough, but why are "function" and "boolean" in a different format to all the others?

PropTypes cheat sheet

According to the Typechecking with PropTypes article, the following types are available:

  • array – primitive type
  • bool – primitive type
  • func – primitive type
  • number – primitive type
  • object – primitive type
  • string – primitive type
  • symbol – primitive type
  • node – anything that can be rendered: numbers, strings, elements, etc.
  • element – an instance of a React component
  • elementType – an element constructor (I think)
  • instanceOf(x) – an instance of class x
  • oneOf(['News', 'Photos']) – one of the given values
  • oneOfType([PropTypes.string]) – one of the given types
  • arrayOf(PropTypes.string) – an array of the given types
  • objectOf(PropTypes.number) – an object with certain property types
  • shape({ a: PropTypes.string }) – an object of a given shape
  • exact({ a: PropTypes.string }) – an object that exactly matches the given shape
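For context, here's how these look attached to a component (a minimal sketch; the component and prop names are made up):

import PropTypes from 'prop-types';

function Profile({ name, age, onSave }) {
  return <button onClick={onSave}>{name} ({age})</button>;
}

Profile.propTypes = {
  name: PropTypes.string.isRequired,
  age: PropTypes.number,
  onSave: PropTypes.func, // note: "func", not "function"
};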

Why "func" and "bool", not "function" and "boolean"?

I'm always tripped up by the spelling of "func" and "bool", mainly because the rest of the PropTypes use full names whereas these two don't.

After asking on Twitter, a few folks suggested it might be to avoid reserved Javascript names.

But that still didn't answer the question because while "function" is a reserved token in Javascript, "boolean" definitely isn't.

Eg. assigning to function throws:

> const function = 'error';
Thrown:
const function = 'error';
      ^^^^^^^^

SyntaxError: Unexpected token function

But assigning to boolean is totally fine:

> const boolean = 'truly an allowed keyword';
undefined
> boolean
'truly an allowed keyword'
> Boolean(boolean)
true

Further, these tokens are both allowed in an object definition:

> const ParpToots = { function: 1, boolean: 2 }
undefined
> ParpToots.function
1

The plot thickens

I wasn't really happy with the answers I was getting, so I did some Googling.

The search came up empty until I stumbled on this question on the PropTypes GitHub issue tracker from 2017:

Hi, I've searched a bit in the Readme and in the issues here, I did not find why we do not use Proptypes.function and Proptypes.boolean like we do for object (vs. obj), etc.

Why the shortnames? Are they reserved words? If not, it would be nice to create aliases maybe for these two ones no?

Which was followed up a few hours later with the answer:

Yes, you can't do const { function, bool } = PropTypes in JS because they're reserved words.

Which… is a little more satisfying.

Except we've already shown boolean isn't a reserved word. So what's going on? 🤔

boolean: a reserved word in ES3

Having found the reason why PropTypes doesn't use boolean, I needed to connect the dots. Why is it considered a reserved word?

I eventually landed on the MDN Docs on Javascript lexical grammar which lists the full set of reserved words for Javascript, as well as some previously reserved words from older specs.

And wouldn't you know; there's boolean sitting in a list of "future reserved words" from the ECMAScript Language Specification edition 3, direct from the year 2000.

7.5.3 Future Reserved Words

The following words are used as keywords in proposed extensions and are therefore reserved to allow for the possibility of future adoption of those extensions.

abstract enum       int       short
boolean  export     interface static
byte     extends    long      super
char     final      native    synchronized
class    float      package   throws
const    goto       private   transient
debugger implements protected volatile
double   import     public

The bingo card of Javascript features

Looking at the list, there's a good mix of keywords that eventually made it into the spec: const, class, import, all big-ticket items.

"boolean", however, was eventually removed from the list and is no longer reserved.

I'm not sure what it would have been for, but alongside "int" and "short" you could wager it was intended to be part of a fully typed Javascript spec.

In fact, peering through history I found a bunch of resources around typed Javascript as early as 2000 (Microsoft had an optionally typed implementation of JScript for .NET 🤯), and there are some fascinating papers from around 2005 that talk about what sounds a lot like modern day Typescript.

Whatever alternate history we avoided, "boolean" is no longer a reserved word. Regardless, it left its legacy on the PropTypes package and many a Failed prop type: prop type is invalid error in our consoles.