I’ve been using Coolify to self-host a lot of my sites, including this one. But it’s not been without its problems.
I’ve noticed a lot of flakiness, including databases disappearing and taking down services seemingly at random. At one point I was unable to log in to any services, including Coolify itself.
Coolify uses a lot of disk space, and when you run out of space things stop working.
Coolify: "no space left on device, write"
I noticed recently that my Ghost blog couldn’t connect to the database, and assumed it was just some general flakiness.
Then while trying to build another Node.js project I received this error:
[13:09:49.288] #8 12.84 npm ERR! code ENOSPC
[13:09:49.290] #8 12.84 npm ERR! syscall write
[13:09:49.293] #8 12.84 npm ERR! errno -28
[13:09:49.298] #8 12.84 npm ERR! nospc ENOSPC: no space left on device, write
[13:09:49.303] #8 12.84 npm ERR! nospc There appears to be insufficient space on your system to finish.
[13:09:49.306] #8 12.84 npm ERR! nospc Clear up some disk space and try again.
I had already resized the Coolify disk and filesystem up to 70 GB and it was full again! What's going on?
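Before clearing anything, it's worth checking where the space is actually going. Assuming Docker is the culprit (it usually is with Coolify), these two commands over SSH tell the story:
# overall filesystem usage
df -h
# how much of that is Docker images, containers, volumes and build cache
sudo docker system df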
Cleanup storage in Coolify
There’s an easy way to clean up storage under Servers ➡ Cleanup Storage.
I hadn’t noticed this button before, but clicking that cleared up 50 GB of storage space on my Coolify server and everything started working again.
I don’t know for certain, but I suspect under the hood this is running a docker prune operation to clean up old containers and images. If you’re unable to log into Coolify and you can’t resize your disk, running that cleanup manually might be the next option.
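If you can't reach the Coolify UI at all, the rough manual equivalent over SSH is something like this. Note that -a removes any image not attached to a running container, so expect some re-pulls and rebuilds afterwards:
# remove stopped containers, unused images, dangling networks and build cache
sudo docker system prune -a
# optionally clear the builder cache explicitly too
sudo docker builder prune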
Self-hosting with NAT, port forwarding and dynamic DNS is kinda fragile. I’ve been using a very cheap cloud-hosted nginx VPS to forward traffic to my self-hosted servers and it works nicely.
But tonight I set up an SSH tunnel that punches out from my server, skipping the NAT, forwarding, and DNS stuff entirely. It’ll dial home from anywhere there’s network, so I could even take my server to the park and it should work over 5G.
I just think that’s neat.
I’ve tried to explain a bit of my thinking, and a loose guide for how to set this up yourself. These instructions are for someone who’s vaguely familiar with nginx and ssh.
A typical port forwarding scenario opens ports on each device. When all the right ports are open, traffic flows all the way through from the internet to my self hosted server.
In my example, I have a nginx server on a cheap VPS in the cloud that handles forwarding. That VPS looks up my home IP address using a dynamic DNS service, then forwards traffic on port 80 to that IP. In turn my router is configured to forward traffic from port 80 on to the self hosted server on my network.
It works well, but that’s a lot of configuration:
Firstly I need direct access to the ‘net from my ISP, whereas today most ISPs put you behind a carrier grade NAT by default.
If my IP changes, there’s an outage while we wait for the DNS to update.
If my router gets factory reset or replaced with a new one, I need to configure port forwarding again.
Similarly, the router is in charge of assigning IPs on my LAN, so I need to ensure my self hosted server has a static IP.
More resilient port forwarding over SSH
We can cut out all the router and dynamic DNS config by reversing the flow of traffic. Instead of opening ports to allow traffic into my network, I can configure my self-hosted server to connect out to the nginx server and open a port back over SSH.
You could also use a VPN, but I chose SSH because it works with zero config.
In this diagram, the self-hosted server makes a connection to the nginx server in the cloud via SSH. That ssh connection creates a tunnel that opens port 8080 on the nginx server, which forwards traffic to port 80 on the self hosted server. Nginx is then configured to forward traffic to http://localhost:8080, rather than port 80 on my router.
So the router doesn’t require any configuration, the cloud-hosted VPS only needs to be configured once, and the dynamic DNS service isn’t needed at all because the self-hosted server dials out and creates the tunnel itself from wherever it happens to be.
The huge benefit of this zero-config approach is I can move my self-hosted server to another network entirely and it will dial back into the nginx server and continue to work as normal.
How to set up an nginx server to forward to a self-hosted server
Putting an nginx server in front of your self-hosted stuff is a good idea because it reduces your exposure to scary internet risks slightly, and can also be used as a caching layer to cut down on bandwidth use.
In these examples, I’m forwarding traffic to localhost:8080 and localhost:8443, and will set up an SSH tunnel to carry that traffic later.
There are two ways to set up forwarding:
As a regular nginx caching proxy:
This is a good option when you want to utilise caching. However, you’ll need to set up your Let’s Encrypt certificates on the nginx server.
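My actual config isn't reproduced here, but a minimal sketch of this approach looks roughly like the following. example.com, the certificate paths and the cache zone name are placeholders, and the upstream port matches the SSH tunnel set up later:
# illustrative only; proxy_cache_path lives in the http block
proxy_cache_path /var/cache/nginx/selfhosted keys_zone=selfhosted:10m max_size=1g;

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache selfhosted;
    }
}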
The second option is a raw TCP forward using nginx's stream module. This method is easier for something like Coolify that handles virtual hosts and SSL for you, but the downsides are that there's no caching, we can't add an X-Forwarded-For header, and it eats up an entire IP address. You can't mix a socket forward with a regular proxy_pass.
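A sketch of that stream-module approach, again with placeholders: 203.0.113.10 stands in for the dedicated public IP this method needs, and 8443 matches the tunnel:
# goes at the top level of nginx.conf, alongside (not inside) the http block
stream {
    server {
        listen 203.0.113.10:443;
        proxy_pass 127.0.0.1:8443;  # hand the raw TLS connection straight to the SSH tunnel
    }
}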
The -R argument opens port 8080 on the remote server and forwards all traffic to port 80 on the local server. I’ve included two forwards in the command sketched below, one for HTTP and one for HTTPS. The 127.0.0.1 bind address keeps those forwarded ports on localhost, so only the nginx server itself can send traffic through them, but you could open them up to the whole world with 0.0.0.0.
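Run by hand, the command I'm describing looks something like this (same placeholder hostname as the systemd unit below):
ssh -NTC -o ExitOnForwardFailure=yes \
  -R 127.0.0.1:8080:127.0.0.1:80 \
  -R 127.0.0.1:8443:127.0.0.1:443 \
  root@myNginxServer.au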
How to set up a persistent SSH tunnel/port forward with systemd
Then, create a systemd service to maintain the tunnel.
sudo vim /etc/systemd/system/ssh-tunnel-persistent.service
And paste:
[Unit]
Description=Expose local ports 80/443 on remote port 8080/8443
After=network.target
[Service]
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/ssh -NTC -o ServerAliveInterval=60 -o ExitOnForwardFailure=yes -R 8080:127.0.0.1:80 -R 8443:127.0.0.1:443 root@myNginxServer.au
[Install]
WantedBy=multi-user.target
You can then start the systemd service/ssh tunnel with:
# reload changes from disk after you edited them
sudo systemctl daemon-reload
# enable the service on system boot
sudo systemctl enable ssh-tunnel-persistent.service
# start the tunnel
sudo systemctl start ssh-tunnel-persistent.service
My observations using SSH tunneling
If all is working, those steps should now be forwarding traffic to your self hosted server.
Initially this was difficult to set up because of how vague the docs are about whether to use -L or -R, but once it was running it worked fine.
The systemd service works well for maintaining the connection and restarting it when it drops. I can reboot my nginx proxy and see the tunnel re-establish shortly afterward. My high-level understanding is that when the tunnel breaks, the ServerAliveInterval=60 keepalives eventually fail, the ssh command realises the connection has dropped and exits, and systemd restarts the service ad infinitum.
You can adjust the ssh command to suit. There’s probably not much point enabling compression because the traffic is likely to already be compressed. But you could tweak the timeouts to your preference.
I'm on a steam train ride with my mum, and she starts telling a story of the trains when she was young. So, thinking quickly, I whip out my phone, press record, and get her to hold it so I can actually record her voice over the background noise.
It comes out distorted to ever loving shit.
So this sucks. I have to go back to the original onboard camera mic but it’s SO loud with all the engine noise, cabin chatter, and clanking in the background. Even tweaking all the knobs, you can barely hear mum at all.
Are there any AI tools to isolate voice? I remembered I’ve been using Krisp at work to cut down on the construction noise from next door. Maybe if I run the audio through that…
So I set the sound output from my video editor to go through Krisp, plug in my recorder, and play it through. It’s tinny, it’s dropped some quieter bits, but it’s totally legible! Holy cow.
Now I’ve got an audio track of mum’s voice isolated from the carriage noise. I can mix it back together with the original to boost the voice portion and quieten down the rest. This is kinda a game changer for shitty vlog audio.
This is a pretty convoluted workflow, so it’s really only useful for emergencies like this. But I’m really happy that it managed to recover a happy little memory. And I hope one day Krisp (or someone else, I don’t mind) releases either a standalone audio tool or a plugin for DaVinci Resolve.
As an aside, the Google Recorder app is officially off my Christmas list. Any recommendations for a better one?
For a while I’ve been wanting to update my website, but I’m really not a designer and I knew any attempts to improve on what I already had would be a haphazard mess.
I was looking for a new job as a React developer and really wanted to hone my skills, so I thought what better way than to build a new site in React?
As for design… why not pay homage to one of the most influential operating systems of my youth: Windows 9x. And for fun, why not make it all fit on a floppy disk.
The rise of retro nostalgia
Windows 9x is the loose name for the operating systems from Windows 95 through ME. They were pretty shoddily built on top of MS-DOS and kinda sucked. But they were revolutionary at the time, and we didn’t know better.
The design aesthetic, particularly in the Windows 98 era was something to behold.
In the present day, retro tech is really making a comeback. One of my favourite examples of this is Paul Verbeek-Mast’s horrible, excellent website, which was kind of an inspiration for me through my design process.
But there are plenty of other amazing examples of retro nostalgia including the gorgeous poolside.fm streaming radio, and this fun game concept:
Ultimately the entire site is designed to fit on a 3.5″ floppy disk, attached to a Raspberry Pi running nginx, sitting on the shelf under my TV.
That means the entire site is 1.44 MB (or less) at any given time, and served to you straight from the ’90s.
The site is using Hexo to render out the static content, which includes a bunch of custom theming to make the data hook together nicely.
It’s also using Netlify for builds and Cloudflare as a CDN, so chances are you’ll never actually have to wait for the magnetic drive to spin up. But you never know! I get a little thrill out of that.
Update: this is back on Netlify while I’m at Fronteers Conference since I don’t have time to put the pi back together.
React & open source
This site was largely built with Preact (A fast 3kB alternative to React with the same modern API). The content is built with Hexo then progressively enhanced, so you can disable javascript (with the skip link for accessibility, or in the Start menu just for kicks) and the site still mostly works.
The interface is inspired by the more nostalgic bits of Windows 98 and ME, which were my operating system of choice in my more formative years.
If I’m honest, this was a terrible choice because the (p)react lifting-state/render model is not great for large applications like this, and I led myself into an architecture that’s super inefficient and hard to maintain. But at this point I don’t care, it’s working pretty well.
The UI components and some of the apps have been released on GitHub as a library called ui95. It’s a bit rough, but you can use the library to create your own sites and apps, or just as a learning tool. Interestingly, Artur Bień has been working on a parallel component library of Windows 95-styled components as well, so that’s probably worth checking out too.
Some apps were built by third parties, including Paint. Originally I was planning on including Webamp too, but it was too big to fit in my size budget. You can check each app for license information.
Where to from here?
Not sure. I’d like to post more on my blog and maybe find a local computer group.
But in seriousness, this was a fun project and I learned a lot putting it together. I hope you get some inspiration out of it and bring back a little of the whimsy in the retro web.
In 2018 I wanted to buy a Google Home because I was working at ABC News on chatbots and figured immersing myself in the voice assistant hype would give me a better perspective on how to create for them.
Maybe I could write an app! Or at least understand better how they could fit into people's lives. I was never especially convinced of the broader applications, but it was cheap enough so I figured it couldn't hurt.
After buying a second for the bedroom and using it for a year and a half, I finally tipped over the edge and gave up on the platform for good.
What's so good about Google Home?
Ultimately voice assistants have different use cases for everyone.
One of my friends uses an Alexa for music and managing the contents of their fridge. Another uses it to control home automation (and terrorize the cat).
Personally my main uses were checking the (variable Dutch) weather and asking about the time. The latter actually surprised me, it's super useful when you're in the shower or in the dead of night and don't want to open your eyes to look at a clock.
The problem was that aside from turning on and off my lights, it really wasn't doing much for me. I'm not interested in radio, could never get podcasts working, and the news is easier to read online. On top of that, finding apps on Google Home is downright impossible.
I'd say outside of Google's main offering there were no killer apps. It's not a great ecosystem.
Then the bugs
From the get-go I couldn't use the full set of functionality because I have a Google Apps account rather than a plain old Google account.
For a while I could create calendar events, but that feature disappeared unceremoniously one day. I couldn't send messages, dictate emails, or receive notifications and there was no real integration with any major Google features. I'm not sure how much of this was due to my Apps account, and how much was just missing features in general.
But the thing that frustrated me the most was the reliability of the system. In the past few months it seems to have completely tanked.
At various points I've had the assistant light up and start listening for no reason at all, switch to another gender and accent, and more recently it stopped recognizing devices on my network like my TV and smart lights.
As an isolated event this was frustrating, but the frequency it was happening killed my faith in the system.
Setting alarms, weather updates, setting timers when cooking. Also telling it good night so it plays soothing sounds for me.
One of the biggest sticking points for me was that for Google Assistant to work, you needed to enable Web & App Activity on your Google account.
This is an all-encompassing feature that logs all your interactions with Google, including searches. It's not just for your voice assistant.
I was initially hesitant to turn this on because it's super creepy having your Google searches stored in perpetuity, especially when dealing with sensitive or embarrassing topics. But I did it because I wanted the hardware assistant to, you know, actually work.
While you can delete Web & App Activity from the My Activity site, it was still kinda chilling and I started using a lot more tools like Duck Duck Go, more private windows, and Firefox Focus (a private browser for mobile which I highly recommend).
But last month after a period of Google Assistant constantly misunderstanding, getting things wrong, and at one point playing loud rock music instead of white noise in the middle of the night, I decided to turn off Web & App Activity and see what happened.
Using Google Assistant without Web & App Activity enabled
Spoiler: not much changed when I turned off Web & App Activity, which surprised me a little.
When I first started using the device this was a mandatory feature, and it wouldn't work without it. But it seems they've done some work on making the hardware devices work without logging enabled.
That said, it wasn't perfect. I lost access to my third party integrations such as LIFX lights and Chromecast controls so it wasn't a complete solution and made the devices pretty useless beyond just the time and weather.
Turning lights on/off, playing music, setting timers. Trying to make it say funny shit…
So my two Google Home Minis have been sitting on my bedside table for the past week and I'm not sure what to do now.
I like the idea of voice assistants and would consider getting another in the future, but right now the Google Assistant isn't very useful. I'm not a big fan of Amazon, and I hear Siri is pretty useless too so I don't hold out hope on things changing any time soon.
From my brief question on Twitter, it doesn't seem like many others are using them for much more than basic tasks. While the race to the bottom on price and the ubiquity of smartphones have contributed to these things being in everyone's homes, I genuinely wonder where the voice assistant revolution is heading from here.
When I was at school the mobile phone went from a luxury item to something that was affordable and even kinda essential.
I remember that Nokia cycled from orange backlights all through the colours of the LED rainbow until eventually you could get them in white, which was really fancy among the kids at school.
I don't recall which came first for me: IRC or ICQ.
But either way, back in the early 2000s when text messaging still cost 25 cents a pop, the freedom and possibility of being available 24/7 was super exciting (as long as the dial-up connection hadn't crapped out).
I'd stay up all night just chatting to people, and for an antisocial teen it was an amazing enabler.
But there were too many apps
Chat networks sprung up like weeds, everyone was writing them. Even I had a crack, which is to say I played around with Visual Basic building what was a terrible (but in my memory quite slick-looking) GUI app.
Eventually folks came up with the incredibly smart idea to reverse engineer a bunch of chat protocols and build an app that could connect to them all at once.
As the first of many apps to do this, I loved Trillian to bits (but as a skeezy high school student never managed to pay for it). Eventually I moved to Linux and Pidgin was good enough, so I ended up using that for yeeeaaarrrrss before it fell into obscurity.
Obscurity because after a period where all the disparate chat apps started federating and talking to one another, all the old guard died off and got replaced by a new and vastly less interoperable bunch of chat apps.
Which is where we are now.
Chat apps suck, and I'm so over it 🙄
Facebook (and by extension Messenger) have been tarnished by the stink of their inappropriate data collection and sharing. A lot of my friends use Instagram, but I doubt anyone is going to trust Facebook again.
Slack just shut down their IRC/XMPP gateways leaving you to use the slow, bloaty Electron app that barely works. No joke, I have to close it whenever I boot a VM because it uses so much RAM.
Twitter is trying their hardest to destroy all the goodwill of the early adopters by plastering ads and featured content in the timeline and push notifications, while at the same time killing third party API support. This is a burning platform, and I'm done with it.
Whatsapp is just plain ugly. I have at least one friend boycotting it at the moment, otherwise this would be a contender for the messaging platform most of my friends use.
Signal security is questionable at best, with at least two exploits this year that I know of, and a frustrating dependence on Electron. I've had so many issues with the Android app in the past, I don't think it's worth anyone's time.
iMessage only works on iOS and MacOS. It's the height of arrogance.
On top of these, the remainder of things I've used in the past couple of months are Discord, Hangouts (lol), Microsoft Teams, various hook-up apps (I'm only human), Skype, Meetup, and of course regular trusty old text messaging. There are just too many things.
I don't even know
Twitter has been my go-to messenger for a little while now. I wanted it to be the universal SMS of the Internet, but right now they're more focused on trying to be the World Cup news hub (side note: exploding head 🤯 should really be a ligature so you can join it with other emojis such as rolling eyes 🙄).
The other day I cracked it and uninstalled it from my phone. Which leaves the question of which trade-off messaging app do I use to talk to my friends?
So despite being burnt by buggy Android group texts as recently as last month, I'm just about ready to go all-in on SMS. Relive the glory days of standards and interoperability with a service I'm paying for.
So, you know. Send me a text. (International fees and roaming charges may apply)
Set over a month-long period, the js13k challenge is to create a compelling game within 13 kilobytes. This doesn't sound like much, but you can do a lot with it if you get creative.
Last year I dreamed fairly small, with a space invaders clone set to polar coordinates. I had fun making it, but it wasn't especially playable and this year I wanted to do something completely different and come up with a mobile friendly touch-based game.
The Concept
The concept had been rattling around in my head for a while. I've a very soft spot for city builders, and I love isometric art. While an outright city builder probably wouldn't be feasible, a subset thereof might just work.
My main inspiration was an old DOS/Windows game called Pipe Dream, in which you're given an infinite number of puzzle pieces and need to connect the start pipe to the edge of the map somewhere before the liquid spills out. Mash this concept up with the city building theme and you've got a basic connect-the-infrastructure sim.
When the theme of the jam came out as “reversed” I really struggled to find a way to align the concept with the theme, so I had to mix it up.
Ultimately I landed on the puzzle-style "arrange the pieces to fit" theme, in which you need to strategically destroy or reverse the order of the tiles as they come off the queue. It's a little less "world buildy" and a little more constrained, but I think the gameplay works, and it gives me the opportunity to expand the game with new puzzles in the future.
The Tech
The tech is also an idea that's been bouncing around in my head for a while. I've long been a fan of isometric art (pixel art as well as more recently isometric vector art and voxel art) so I wanted to do something in that style.
I had a play around in CodePen and came up with a crazy simple plan: draw everything in voxels. The algorithm to draw an isometric cube isn't all that difficult. You've got three visible sides, each side consisting of four lines and a fill colour. Turn that into a function and you can start making some cool stuff.
You can adjust the sliders on the example above to modify the cube.
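My CodePen isn't embedded here, but the function boils down to something like this sketch. The canvas id, colours and sizes are all placeholder values; each visible face is outlined and filled.

// draw one isometric cube; (x, y) is the top corner on screen, s is the edge length in pixels
function drawCube(ctx, x, y, s, colours) {
  ctx.strokeStyle = '#000';

  // top face: a diamond
  ctx.fillStyle = colours.top;
  ctx.beginPath();
  ctx.moveTo(x, y);
  ctx.lineTo(x + s, y + s / 2);
  ctx.lineTo(x, y + s);
  ctx.lineTo(x - s, y + s / 2);
  ctx.closePath();
  ctx.fill();
  ctx.stroke();

  // left face
  ctx.fillStyle = colours.left;
  ctx.beginPath();
  ctx.moveTo(x - s, y + s / 2);
  ctx.lineTo(x, y + s);
  ctx.lineTo(x, y + 2 * s);
  ctx.lineTo(x - s, y + 1.5 * s);
  ctx.closePath();
  ctx.fill();
  ctx.stroke();

  // right face
  ctx.fillStyle = colours.right;
  ctx.beginPath();
  ctx.moveTo(x + s, y + s / 2);
  ctx.lineTo(x, y + s);
  ctx.lineTo(x, y + 2 * s);
  ctx.lineTo(x + s, y + 1.5 * s);
  ctx.closePath();
  ctx.fill();
  ctx.stroke();
}

var ctx = document.getElementById('c').getContext('2d');
drawCube(ctx, 100, 20, 40, { top: '#7ec850', left: '#4e8c2f', right: '#3a6b22' });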
Drawing voxel sprites
With the idea down, the next step was to draw a game world. There's a bunch of sites out there that have isometric code examples. I had Clint Bellanger's isometric math tab pinned in my browser for a good week, though in reality there were only two functions I really needed:
Convert isometric position to screen/pixel position
Convert screen/pixel position to isometric position
These two functions let me both draw isometric boxes to the appropriate location on the screen, and then detect where those boxes were stacked when the user interacted with them.
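These aren't my exact functions, but the two conversions reduce to something like this, where TILE_W and TILE_H are the half-width and half-height of a tile in pixels (placeholder values):

var TILE_W = 16; // half of a tile's on-screen width (assumed value)
var TILE_H = 8;  // half of a tile's on-screen height (assumed value)

// isometric grid position -> screen/pixel position
function isoToScreen(ix, iy) {
  return {
    x: (ix - iy) * TILE_W,
    y: (ix + iy) * TILE_H
  };
}

// screen/pixel position -> isometric grid position
function screenToIso(sx, sy) {
  return {
    ix: (sx / TILE_W + sy / TILE_H) / 2,
    iy: (sy / TILE_H - sx / TILE_W) / 2
  };
}

// e.g. isoToScreen(2, 3) gives the pixel offset for grid tile (2, 3)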
From there, the trick was to come up with sprites made from boxes. While it was possible to sort of work it out in your head, this was often a trial-and-error process, placing boxes and seeing where they land in the output.
Each item in the array corresponds to a coordinate. In this case the z, x, y coordinates, x width, y width and height.
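The real sprite data isn't shown here, but to give a feel for the format, a made-up sprite might look like this, with a hypothetical drawBox() standing in for the engine's box renderer:

// each box: [z, x, y, xWidth, yWidth, height]; the values below are illustrative only
var house = [
  [0, 0, 0, 4, 4, 2], // walls
  [2, 0, 0, 4, 4, 1], // roof stacked on top
  [0, 4, 1, 1, 2, 1]  // front step
];

house.forEach(function (box) {
  // drawBox is a stand-in for the engine's isometric box renderer
  drawBox(box[0], box[1], box[2], box[3], box[4], box[5]);
});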
In the end, my sprite list resulted in a bunch of arrays and functions that don't make a great deal of sense to the naked eye but render cute little box sprites in the game engine.
Putting it all together
With a bunch of sprites at my disposal, all that was left was to implement the game! Easy, right?
I actually took a week off specifically to work on Road Blocks, and I'm glad I did. Through the week I implemented the engine and various features (some of which didn't make the final version) and tuned interactions.
Because I wanted this to be a first-class touch game, I implemented everything mobile first. This was a fantastic way to discover all the limitations of the platform at the get-go rather than having to refit a desktop game to mobile later on.
I also spent a lot of time user testing the game, be it in person or through analytics. Testing is a really important part of any gamedev process, and it was really enlightening to watch people playing my game for the first time. Many a puzzle or interaction was tuned based on feedback and watching people work out a level.
I also released my game via Twitter as a public beta to gather play statistics and weed out any errors that might crop up. I used Loggly to record a bunch of custom game stats and events, and the results were quite valuable in determining difficulty and how people were faring playing the game.
One particularly revealing fact was that most people were getting stuck on a particular level. Armed with feedback from testers and the hard facts of the analytics, I tweaked it to be not quite as difficult and pushed it from the middle of the campaign to the end to make it a sort of "final boss" level instead.
By my analytics most testers unlocked "free play" but not so many used it. Level 7 seems to be the toughest, only 4 people solved it. #js13k
As a side note, one day I got distracted and came up with a data encoding scheme for level data which helped reduce the file size in the final zip.
Conclusions
I'm really pleased with the results this year. The game clocked in at just under 13 kilobytes and made efficient use of the space.
Some notes in hindsight:
Levels and sprites should probably use a binary data scheme to reduce file size and allow the use of Web Workers.
For some reason HTML5 canvas makes the CPU spin up like crazy. I'd like to get to the bottom of this sometime.
User testing is mandatory. When developing something in isolation you can get into a weird headspace and not notice the obvious stuff.
I had initially intended to have little cars animating along the roads, but ran out of space and couldn't think of how best to do this. I need to learn more about vectors in gamedev.
Ultimately, this year's js13k was a whole heap of fun and I'm really proud of my result. There's a bunch of awesome entries that you should check out, and you should consider entering next year.
Further reading
You can play Road Blocks on the js13k website. You can also check out:
I didn't go into this weekend with a project, but I woke up Saturday with an idea I couldn't get out of my head — I want to write a music sequencer with a really low footprint for use next month in the js13k game competition.
I've written about js13k before, and took part last year. This time around I want to be a bit more prepared, and I wanted to make a tool that would make it easier for the community to make cool stuff!
So this weekend I've been working on a bunch of different tools to make this project a reality.
Mini Sequencer
Mini sequencer is exactly that: a mini sequencer implementation that can play sounds at various times to form tunes.
This was my first mini project, as I was interested to see what audio performance in the browser would be like; it's surprisingly good. That said, if I get time I'd like to look into moving it over to the Web Audio API, as that's a lot less hacky and should perform better.
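This isn't what Mini Sequencer currently does, just a sketch of the Web Audio API approach I'm considering: the context has its own clock, so notes can be scheduled ahead of time rather than fired with setTimeout. The frequencies and timings here are arbitrary.

var audioCtx = new AudioContext();

// schedule a short square-wave note at `when` seconds on the audio clock
function scheduleNote(freq, when, duration) {
  var osc = audioCtx.createOscillator();
  var gain = audioCtx.createGain();
  osc.type = 'square';
  osc.frequency.value = freq;
  osc.connect(gain);
  gain.connect(audioCtx.destination);
  gain.gain.setValueAtTime(0.2, when);
  osc.start(when);
  osc.stop(when + duration);
}

// four beats of a tiny tune at 120 BPM (0.5 s per beat); 0 means rest
[440, 0, 554, 659].forEach(function (freq, beat) {
  if (freq) scheduleNote(freq, audioCtx.currentTime + beat * 0.5, 0.2);
});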
jsfxr is a little 8 bit synth which was implemented a few years ago for use in the js13k competition.
Since this is probably what most of my sounds are going to be made with, I wanted to be able to create new sounds from my sequencer. While there are a few sites out there (my favourite by Super Flash Bros) that let you adjust sliders and make new sounds, there's not actually an out-of-the-box tool you can use to plug into your own project.
So after a bit of reverse engineering of as3fxr (the original Flash version), now there is.
This actually took up a whole bunch of my time, and if I were a project manager I would have dropped this to focus on other stuff, but hindsight, right?
I got to the end of the weekend and felt like I hadn't really ended up with much to show off.
The timeline was one of the big things I'd been putting off doing because it's slightly weird and I wasn't quite sure how to tackle it, so I went all-out and implemented a standalone component (depends on jQuery but probably doesn't need to).
I'm pretty proud of this one, it's styled reminiscent of the old Fruity Loops sequencer and just looks a bit retro.
So despite having made a million things this weekend, I haven't actually finished the project I set out to do. Right now I have:
Create & manage a library of instruments (with jsfxr editing built in)
A super rudimentary timeline (edit some JSON by hand and the music will update)
BPM adjustment.
Things I need to do from here:
Plug in the actual fruity timeline so you can edit your song visually.
Implement a "piano roll" feature so you can have different pitches of the same instrument.
Stop/play/seek functionality
Export your file
Load up proper audio files (mp3/wav/whatever) so you can play with those too.
It looks like there's a lot there left to do, but I think I'll be able to get a minimum viable product done with another weekend. I'm not sure if I want to publish the code yet since it's a massive pigsty, but I'll aim to get something out before next weekend is through.
Edit: Ended up getting something together Sunday night. Try it out.
The concept is fairly simple: make a game in under 13 kilobytes with no external dependencies. The idea is to see what people can come up with on a budget, and it’s awesome to see some of the entries this year.
My game, Polar Defender, is a basic shoot ’em up on a polar coordinate system. It’s heavily inspired by Space Invaders, except you have to defend various planets from all sides at once. It’s heavily reliant on particles and a basic polar trajectory system to provide messy, explodey space fun.
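The polar trajectory idea boils down to a tiny helper like this (not my actual game code): everything is tracked as an angle and a radius around the planet, and only converted to canvas coordinates when drawing.

// convert a polar position (angle in radians, radius from the planet centre)
// into screen coordinates around a planet centred at (cx, cy)
function polarToScreen(cx, cy, angle, radius) {
  return {
    x: cx + radius * Math.cos(angle),
    y: cy + radius * Math.sin(angle)
  };
}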
The theme of “elements: earth, air, fire, water” is optional in the contest, but I incorporated it into my level system (an earth-like planet, a water planet, a fire planet and a gas planet). It’s a little contrived, but I think it works well in terms of playability.
I wanted to include a playable level system with a playful narrative since there’s only so much you can do in 13 kb and I felt it would make it a more personal experience. I feel it worked out well, with six levels (including an initial training level) on various planets and varying degrees of difficulty. After early feedback stating it’s too hard to finish in one go, I adjusted the menus to make each level unlockable rather than having to start over, which really improves the gameplay in a casual sense.
Touch input is significantly more difficult than desktop input because I essentially shoehorned the same concept in where it doesn’t really fit. If I had the chance to do it again I would introduce a separate tap-based firing system on mobile.
13 kilobytes is quite a lot in terms of raw code, but also a challenge to meet when including graphics, sound, polyfills and other boilerplate.
Minification of Polar Defender was done by hand and involved a lot of code tweaks.
The ultimate deliverable needed to be compressed into 13 kilobytes of zip file, which is roughly comparable to a gzipped distribution from a web server.
Some of the things I did which aren’t necessarily best practices include:
Strip unnecessary properties and pre-compile SVG files into a JSON file to be bundled into the main JS build process. This improves compression because there’s less junk and the SVG gets compressed in with the JS which presumably improves duplicate string elimination in the zip format.
Collapse JSON structures into CSV-like strings that can be reinflated later (see the sketch after this list). JSON objects are super-wasteful in terms of repeated properties, and while compression algorithms are generally pretty good with repeated content, it’s still better to remove the duplicates where possible.
Globalise commonly used functions. This isn’t something I’d usually recommend, but considering the constraints, what the hey. Things like aliasing window to w and Math to m reduce byte-level repetition. Additionally, keeping everything else in a local scope lets Uglify optimise away long function names.
Loose comparison and other sneaky tricks. For instance using 1 for true and 0 for false saves 3 bytes per bool and works in a loose JS equality operation if you’re prepared to ignore JSHint complaining a lot.
Reuse everything. I reused a basic set of drawing functions and sprite classes for everything in-game, meaning each new feature was an iteration on an existing one rather than a completely new piece of functionality. See also entity component system on Wikipedia.
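As an illustration of the JSON-collapsing point above (the data here is invented, not from Polar Defender): instead of shipping repeated property names, each record becomes a delimited string that gets reinflated at load time.

// packed form: one record per '|', fields in a fixed order (x, y, type)
var packed = '3,1,2|5,2,0|8,4,1';

var entities = packed.split('|').map(function (row) {
  var f = row.split(',').map(Number);
  return { x: f[0], y: f[1], type: f[2] };
});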
Further reading
In addition to my js13k entry, I’ve got a side-build available in the Chrome Web Store which you can install and carry around with you. The main benefit is that your scores are stored in the cloud and unlocked content goes wherever you do.
Overall I think it worked quite well and I’m happy with the result. There’s some awesome games submitted so far and I can’t wait to see how everyone goes.
Use drawImage(img, x, y) to splat any image or canvas element down into your current context.
var canvas = document.getElementById('c');
var ctx = canvas.getContext('2d');
// load an image and draw it in!
var img = document.createElement('img');
img.onload = function(){
// Draw the image
ctx.drawImage(img,0,0);
// and some text
ctx.font = 'bold 40px Impact';
ctx.textAlign = 'center';
ctx.strokeStyle = '#fff';
ctx.lineWidth = 2;
ctx.fillText('MEMES AT #brisjs',200,40);
ctx.strokeText('MEMES AT #brisjs',200,40);
};
// kick off the load; 'meme.png' is a placeholder, swap in any image path
img.src = 'meme.png';