
RIP Crucible

So... that happened 😬

This is a really disappointing turn of events. I started working on Crucible over 3 years ago and felt like it had so much promise, so it extra-hurts seeing how things ended up for the game we all worked so hard on.

While I still remember how things work I want to document how the Crucible UI works at a technical level. How it's built, how we assemble the pieces, how the logic hangs together, etc. There's nothing brand-new or necessarily radically different here, but there are a lot of interesting and somewhat unusual technical decisions/approaches.

Overview #

At a high level Crucible's UI was two single page applications, the Main Menu and the HUD. Each was a Svelte application with most of the logic around what components to draw at any given time driven by a statechart implemented using XState. The code was authored as ECMAScript modules and then assembled using rollup with a variety of community and custom plugins. All the styling for the UI was driven using CSS, authored against an extended version of the CSS Modules spec called Modular-CSS.

We built multiple custom tools to help us maintain our code standards and quality level across the JS and CSS that comprised the application, as well as making heavy use of ESLint and StyleLint with both community and custom plugins for each. The app was tested on every commit in the continuous integration environment using a test suite built on top of Jest and running all the tests against a headless Chrome instance provided by Puppeteer.

Using a web browser to draw game UI #

Building game UI using a web browser is still an uncommon choice; it's not unheard of, but it's definitely not the go-to approach. There's a bunch of compelling reasons to do it, alongside more than a few potential pitfalls, and I'm extremely proud of the work we did on the Crucible UI.

Never Stop Building #

One of the most important aspects for us of rendering game UI using a browser is that if you do some work up front you can run your UI without needing the game at all. I pretty early-on made the call that I always wanted to be able to run the entire UI outside of the game. This was based on spending about a day working with the prototype version of Crucible we had when I started on the project and was a choice that I'll always look back on proudly. Choosing to factor that into the technical decision-making process early helped guide a lot of the rest of the project.

The UI running outside of the game in standard desktop Chrome meant that while the rest of the team was busy getting the game set up the UI team was already able to iterate on techniques, tools, and features. Without being able to build using a browser the UI team would've been miles behind the rest of the project. Running the real UI in the real game was always the gold standard but introduced enough complexity to the workflow that we often had to build in the browser and then later test in game. It wasn't the approach we wanted to take but sometimes these things are dictated for you by circumstances out of your control.

Coherent GT #

To make this happen Crucible used a product called Coherent GT from Coherent Labs, a version of WebKit specifically intended to be embedded into game engines. The decision to use Coherent GT for UI was made before I started on Crucible but I'd had experience with its precursor, Coherent UI. Guild Wars 2 used Coherent UI and we'd always wanted to switch it over to Coherent GT since it runs the browser instance in the same process as the game. This prevents a lot of the resource starvation issues we saw in the child-process model that Coherent UI used.

The version of WebKit Coherent GT runs is a bit on the older side but supports a lot of the functionality you'd want for modern web dev. ECMAScript 6, ECMAScript Modules, CSS variables, import(), Promises, Media Queries, fetch, flexbox, CSS grid, etc are all present and mostly working. There are sometimes issues in the implementations of those features (presumably since fixed in WebKit) but Coherent's modifications to WebKit probably make keeping it up to date pretty expensive. The only really surprising feature we missed from desktop browsers was the spread operator in object literals ({ ...foo }). Since it wasn't supported we configured ESLint to ban its usage and instead fell back to the older Object.assign({}, foo) form. We could've transpiled this with Babel but since we had no other transpilation needs on the project I made the call to not introduce Babel into our build and instead ban object spread via ESLint. This saved time on every single build every engineer did, and while a minor adjustment it was the right choice as we tried to keep our iteration speed high.
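Here's roughly what that ban looked like — a minimal sketch using ESLint's no-restricted-syntax rule; the selector shown is for modern parsers and may differ from the exact config we had at the time.

// .eslintrc.js
module.exports = {
    rules : {
        "no-restricted-syntax" : [
            "error",
            {
                selector : "ObjectExpression > SpreadElement",
                message  : "Coherent GT doesn't support object spread, use Object.assign({}, obj) instead",
            },
        ],
    },
};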

C++ & Coherent communication #

I worked together with Bob Rost to define the system we wanted for game <-> UI interaction that came to be known as "UI Endpoints". We built a plain REST-ish implementation using the custom game engine hooks provided by Coherent. The one thing we could do that's harder on an actual web site was getting multiple responses pushed back to the UI from the game engine, which let us track values changing over time. The UI was responsible for instantiating connections across various endpoints that would accept params and return information about various parts of game state. The UI having responsibility for opening the connection was an important decision. During early tests of the existing Coherent integration you had to reload the entire game map to see UI changes because the game pushed info to the UI whether it was listening or not. If you reloaded the UI it didn't have the complete state and generally exploded in completely bonkers ways. Letting the UI be in control of requests and ensuring that it always got the current state of the game info when it opened a connection meant that UI Engineers could make changes and hit F5 in the attached Coherent Debugger to see almost-instant UI updates, and the UI would be able to start up as if from scratch each time.
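For illustration, here's a hypothetical sketch of what consuming a UI Endpoint looked like — the names and signatures here are mine, not the actual Crucible API:

// Hypothetical endpoint client, everything here is illustrative
import { openEndpoint } from "./endpoints.js";

// Opening a connection immediately yields the current game state, then the
// engine keeps pushing updates as that state changes over time
const connection = openEndpoint("player/loadout", { slot : 1 }, (data) => {
    render(data);
});

// The UI owns the connection lifecycle, so an F5 reload just re-opens
// everything from scratch and gets a complete snapshot again
connection.close();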

These UI Endpoints had an API contract that we defined in markdown files. This decision led to a ton of questions from non-UI Engineers about how it enforced type-safety (it didn't) or whether they could generate C++ code from it (they couldn't) but what it did give us was a very expressive and human-friendly way to propose and agree upon API contracts. There's probably a different solution that could've solved for more of the things that the API implementers wanted, but we were able to at least mollify some of their concerns by providing better tooling & instructions around how to document their endpoints. There was a small build process I created that would slurp all of the various endpoint .md files and convert them into a single large HTML page (because ctrl+f is super-valuable for APIs) that was used as the reference for both UI Engineers and endpoint implementers. Having an HTML version of the docs that could be linked to whenever folks had questions about a specific endpoint's API ended up saving me a ton of time throughout the project.

To facilitate this custom communication layer and browser development I had to completely rebuild the JS implementation of the engine <-> UI communication layer during our initial integration. We ended up with a streamlined version of it with hooks that would allow us to pretend to be the game engine from browser-land. The Coherent-provided version came with a non-standard implementation of Promise and a lot of functionality that we didn't need so taking some time to fully understand the undocumented APIs it was using and create our own implementation was extremely worthwhile. The first iteration came together quickly over a few days, though as we learned more about usage patterns and how we wanted to write tests it would continue to evolve several times throughout the project.

Faking it #

The first iteration of this mocking layer allowed for adding a "persistent", "static", or "dynamic" mock for a particular endpoint. A persistent mock would always respond with the same info any time a new connection to the endpoint was opened. Static mocks would send a response to any existing listeners a single time, and dynamic mocks were passed a callback that received the parameters sent when the connection was opened and could decide how to respond. We eventually phased out the static mocks when we shifted our testing approach but both the persistent mocks and dynamic mocks were valuable throughout the project. Dynamic mocks in particular were useful because they could reach into fake persistent data like the preferences system. That gave us the ability to have completely usable & accurate preferences mocks in the browser and fully build & iterate on the preferences UI outside of the game client.
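A hypothetical sketch of the three mock types — the real mocking layer was internal, so these names and shapes are illustrative only:

import { mocks } from "./mocks.js";

// Persistent: every new connection to the endpoint gets the same response
mocks.persistent("player/profile", { name : "TIVAC", level : 17 });

// Static: push a single response to any currently-open connections
mocks.static("match/countdown", { seconds : 30 });

// Dynamic: a callback receives the params the connection was opened with
// and decides how to respond, potentially reading fake persistent data
mocks.dynamic("client/preferences", (params) => fakePreferences.get(params.section));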

Controlling all these mocks was something that we didn't really have a great solution for. Originally we sort of mocked... everything, and expected engineers to change the mocks in the source files that they needed to represent a particular setup. This wasn't ideal because it required a complete rebuild of the UI code and also required knowing a lot about endpoint names and response object shapes. This also fell apart even more rapidly as we started building UI for multiple game modes and game states. Eventually, as part of the deprecation of static mocks, we introduced a system that we called "URL Flags" where we allowed for configuring the installed mocks by passing different anchor values. This would look like menu.html#fromgameplay&rewards which, instead of the default menu experience of booting up a client for the first time, would install mocks that told the UI that it was returning from a match and should show the match rewards screen. Or hud.html#character=duelist&mode=practice to see the HUD like you're playing a duelist in practice mode. These could only be changed by entering them and then reloading the page to avoid weird edge-cases with stale mocks, but even still it was a huge upgrade to the workflow.
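Parsing the flags out of the anchor is simple enough that a minimal sketch fits in a few lines — this assumes &-separated key or key=value pairs, and installCharacterMocks() is a made-up example:

const flags = new Map(
    window.location.hash
        .slice(1)
        .split("&")
        .filter(Boolean)
        .map((pair) => {
            const [ key, value = true ] = pair.split("=");

            return [ key, value ];
        })
);

// e.g. hud.html#character=duelist&mode=practice
if (flags.has("character")) {
    installCharacterMocks(flags.get("character"));
}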

Even this approach wasn't perfect; its most noticeable weakness was discoverability. That was true both for end-users as well as for the folks editing the mocks. You could only discover what flags were available by doing a slightly arcane regex search across the entire project, which isn't a good user experience even when you know regexes reasonably well. It worked ok for me but fell apart for folks who didn't know the dark magics of regex or want to approach things that way. Nissy Newlun built out an entire debug console that could slide out from the side of the screen & provided discoverability of all the various endpoints and mock info. It was a way better approach that we never quite had time to get committed and do enough work on to ensure it didn't accidentally end up in the shipping UI.

Testing #

I've never been able to get an actually-worthwhile test suite running for any project I've built before. That feels... really bad to say, but it's honest and I think it speaks to the challenges inherent in testing web applications with server backends. Since this project was entirely intended to be run within a game engine the scope of the external data that could be injected into it was significantly smaller. That fact combined with the work we were already doing to allow for development outside of the game made for the perfect conditions to actually have a test suite. This was another one of those super-pivotal decisions that impacted the entire rest of the project. It seems like a straightforward choice but it wasn't a guaranteed thing and took a reasonable chunk of time to get right and keep running.

Jest #

I was already familiar with using Jest from some of my open source work like Modular-CSS and was really happy with that experience, so we didn't have to look very hard for a test framework. Figuring out how to connect Jest to a browser for testing an entire webapp was a bit more of a journey though. Jest comes with a built-in JSDOM implementation but we quickly ran into pretty severe edge cases with using JSDOM. Most noticeable for our project, even in its nascent state, was a total lack of support for CSS variables and inline styles. We had planned to heavily use Jest's snapshot feature to take a snapshot of the DOM state of particular components, and not being able to inspect inline styles meant that plan was dead in the water. I'd seen a bit of news about Puppeteer as a very recently-released project and gambled a bit that its APIs and smart integration with headless Chromium would pay off.

Spoilers: it did.

Puppeteer #

We started with a hand-rolled implementation that would start up Puppeteer whenever a Jest test run started, but eventually the pattern was packaged up nicely as jest-puppeteer and we were able to seamlessly move over to it to take advantage of having a maintained system instead of something I put together quickly. My approach worked, but using OSS was a way better choice here. We decided early on that having unit-test level tests for components was much less useful for us compared to integration-level tests where the entire Main Menu or HUD were running at once and we were then snapshotting only the components we cared about. This approach required extra work, especially in the first iteration of our testing story, but also meant that we caught cross-cutting issues much faster if a component was misbehaving. Having component-level unit tests as well would've been nice but that wasn't an option given the time-frame and resources the team had, so we went for the better bang-for-our-buck of integration tests.

Using Puppeteer was generally great but we ran into a few places where we kept on getting ourselves into trouble. One of the biggest sources of failing tests was that Puppeteer methods like .click() or .hover() expected to find an element already in the DOM. If the build node you were on was particularly bogged down due to also compiling the game or something, the UI tests could run at less than half of their usual speed and elements would take much longer to appear. If you didn't guard the interaction with a page.waitForSelector() call first you had a 15-20% chance of the call failing and blowing up the entire build. After living with this for a bit and trying to guard all the interaction code we could, it was finally deemed untenable and we instead wrote some local wrappers around .click() and .hover(), as well as a utility method we called .get() that would grab the .outerHTML of an element. All of our wrappers waited for the selector to exist in the DOM by default and so removed an entire class of test failures by their introduction. Like most systemic issues we ran into, this one got a custom ESLint rule to enforce using only the safer helpers instead of the base functionality.
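The wrappers were small; a simplified sketch of the wait-first pattern, assuming the page global that jest-puppeteer provides:

const click = async (selector) => {
    await page.waitForSelector(selector);
    await page.click(selector);
};

const hover = async (selector) => {
    await page.waitForSelector(selector);
    await page.hover(selector);
};

// Grab the .outerHTML of an element once it actually exists
const get = async (selector) => {
    await page.waitForSelector(selector);

    return page.$eval(selector, (el) => el.outerHTML);
};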

It's a bit of an edge case but we ran into this more than once so it's worth pointing out. Puppeteer's .click() method doesn't actually click on an element, it clicks on the center of the element's coordinates. More than once when using .click() we'd have those clicks start not doing anything after a seemingly unrelated change. After a whole lot of debugging with Puppeteer running Chrome in headed mode (thanks to Nissy that was a slight change of the test command, npm run test:visual) we realized that due to z-indexes another element was overlapping the center of the element we were trying to click on. It could even be a transparent element, didn't matter. Once that element was overlapping, if it wasn't a child of the element we were trying to click on, the click would disappear into the ether and you could lose so much time trying to track down why.

Checking the DOM #

jest-puppeteer solved the issue of getting Puppeteer & a web server started up before tests, and helped us ensure we had a clean page set up for every test, but the testing functionality it added wasn't really what we were looking for. We ended up using a small set of custom Jest matchers that provided some key functionality. expect(selector).toBeInDOM() would wait up to five seconds for an element to be in the DOM and supported the expect(selector).not.toBeInDOM() negation that waited five seconds for the element to not be in the DOM. expect(selector).toMatchDOMSnapshot() used a combination of jest-snapshot and jest-serializer-html along with snapshot-diff to wait for an element to exist in the DOM, capture its .outerHTML, and compare that against the stored snapshot. expect(beforehtml).toMatchHTMLDiffSnapshot(afterhtml) was also useful and used diffable-html to nicely format the returned HTML before diffing it and comparing that diff against the saved snapshot of the previous diff.
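As a simplified sketch, a matcher like toBeInDOM can be built on expect.extend() and Puppeteer's waitForSelector — the real version also handled the .not case by waiting for removal instead:

expect.extend({
    async toBeInDOM(selector) {
        try {
            await page.waitForSelector(selector, { timeout : 5000 });

            return {
                pass    : true,
                message : () => `Expected ${selector} not to be in the DOM`,
            };
        } catch (error) {
            return {
                pass    : false,
                message : () => `Expected ${selector} to be in the DOM`,
            };
        }
    },
});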

There's one other approach that we used that was pretty successful, but a bit more subtle in how it worked. I added a new matcher called expect(selector).toReactTo("endpoint", { data }): it would grab the HTML of the element matching the selector from the DOM, run the mock specified, then check the selector again and wait for the HTML to change from what it found previously. This allowed us to remove another large source of timing issues in our tests where we'd run a mock and capture the element's HTML for diffing, but sometimes the element hadn't actually reacted to the new mock data yet because of general slowness.

How to fake the entire world #

Fake data in tests came from the same mocking layer that we used in the browser, although unlike the browser, in the first iteration of our tests nothing was mocked automatically. If you wanted an endpoint to return data, you had to mock it. This came from a place of wanting to avoid having mocks in ambiguous states and enforcing that the environment was exactly the way you had written it to be. In practice this made writing tests slow, error-prone, and obtuse, and I think everyone on the team but me hated it. It also made tests way more susceptible to timing issues on the build server, and no one likes debugging tests that fail only on the build server. I'll be the first to admit I was wrong on that one and took my preference for explicitness-over-implicitness way too far in that particular situation. It was one of the biggest discrepancies in what I had built vs what the rest of the UI Engineers wanted in the entire project and I'm glad they called me out on it.

I took some time and went back to the drawing board with an eye towards merging the fully-mocked environment we already had in the browser with the tighter-control we needed for some tests. Eventually it was a matter of reworking how we handled mocks so that they could be removed/updated in the test files and that was really all it took. Wish I'd listened sooner on that one.

The reworking of how the mocks worked in tests also gave me a chance to polish up how we handled "flows" of functionality. Stuff like waiting for the initial epilepsy warning & logo parade on client boot-up that we didn't want to repeat across every single test that exercised the Main Menu. This functionality could've been extracted out into a normal function but that presented its own issues. Oftentimes you have tests that want to use part of a flow but not all of it, or a test that wants to change a specific part of a flow to add its own custom functionality. We built a small utility that took an object of optional named properties corresponding to lifecycle hooks in the flow and ran them all serially. This way a specific test could do something like return an error from a mock partway through the flow without having to recreate the entire functionality. By the end of the project we had about 5 different flows for all the different permutations of loading into the Main Menu and HUD.
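A hypothetical sketch of the flows utility — hook names and bodies here are illustrative, and page is the jest-puppeteer global:

const bootFlow = async (overrides = {}) => {
    const steps = Object.assign({
        epilepsyWarning : () => page.click("[data-accept-warning]"),
        logoParade      : () => page.waitForSelector("[data-logos-done]"),
        mainMenu        : () => page.waitForSelector("[data-main-menu]"),
    }, overrides);

    // Run each hook serially, awaiting any async work
    for (const step of Object.values(steps)) {
        await step();
    }
};

it("boots into the main menu", async () => {
    // Replace a single step without recreating the rest of the flow
    await bootFlow({
        logoParade : async () => { /* skip it entirely */ },
    });
});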

Flakiness in tests #

As this document has touched on a couple of times already, keeping the tests stable wasn't quite a constant concern but it definitely kept rearing its head throughout the project. Another approach we used to try and ensure test stability was the environment flags that we had rollup inject into the build via a small custom plugin. We could use ISTEST in our JS/Svelte code or a CSS media query to change/disable functionality in the application when it was running in test mode. This was particularly valuable when using Svelte transitions due to how they modified the style attribute of elements and could lead to snapshots breaking. Using the ISTEST global value we were able to disable those animations globally while running the test suite and gained a ton of stability at the cost of losing some of the "real"-ness of our testing environment.
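The real project used a small custom plugin, but @rollup/plugin-replace achieves the same build-time flag injection and makes for a decent sketch:

// rollup.config.js
import replace from "@rollup/plugin-replace";

export default {
    input   : "src/menu.js",
    plugins : [
        replace({
            preventAssignment : true,
            values            : {
                // ISTEST becomes a literal true/false in the output, so
                // dead test-only branches can be optimized away
                ISTEST : JSON.stringify(process.env.NODE_ENV === "test"),
            },
        }),
    ],
};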

Another more drastic approach was to switch from using snapshots of an entire element's HTML to instead checking "does this element exist in the DOM with the particular attributes we care about?" type of test assertions. It's not my preference because it means even tighter coupling between the test code and the component template/styling, but sometimes despite our best efforts we couldn't get a test stabilized both locally and on the build server. In those cases simplifying down what the test checked for at the cost of some test file complexity was the right trade-off.

And sometimes we couldn't ever get a test stabilized to the point where we could keep it running so we'd .skip the test entirely. Never what we wanted to do but sometimes pragmatism needs to win out and you can't spend more time trying to clean up something that isn't working.

Core Technologies #

Svelte #

There's faster frameworks, there's frameworks that are more popular, and there's... other frameworks, but we chose Svelte. At the time it was a risky choice but it's proven repeatedly to be a choice that I'm proud to have made.

Svelte was at v1 when I was looking at framework options for Crucible. There's a whole bunch of pre-me backstory to the Crucible UI but I don't have enough context to do those stories justice. What was set in stone at the time was that it was gonna use web tech via Coherent GT, and I quickly realized that we were going to need to build it mostly via UI Engineers. There are probably great WYSIWYG widget assembly solutions out there? I don't know, I've never seen one.

Svelte v1 was very... indie band. It had some attention because it was from Rich Harris, who had done some cool stuff with Ractive and had also built out rollup. I followed Rich mostly because of rollup, but was intrigued by this new approach being taken with Svelte. Having a compiler do static analysis of app code and output custom-generated runtime code was unique in the JS landscape at the time. I'd done enough with Modular-CSS that I was pretty familiar with this sort of process on a smaller scale, and it really resonated with me. It didn't hurt that Svelte beat the pants off most any other framework in speed at the time either.

Picking a framework #

When it came time to settle on a framework (or no framework) the best idea I had was something I ended up stupidly calling "reticles 5 ways". I built the simplest reticle we had in 5 different paradigms: vanilla JS, Mithril, React, Vue, and Svelte. It was really more like 4.5 ways because I never got the Vue version up to snuff with the rest due to a variety of not-Vue's-fault reasons, but it was enough to teach me that I wasn't really feeling great about how Vue worked. At the end of the week or so it took to build these all out I sat and quietly ruminated about the developer experience of each. I also took broad snapshots of how fast each one was. After a bit of thought and speed comparisons it became clear that Svelte was the right mix of speed and DX for the team I had in mind.

So we built the UI with Svelte. Started with v1, used the tooling they provided to port it all to v2 when it became available in what was the most surprisingly-easy major version upgrade with syntax changes I'd ever seen. v2's changes over v1 were mostly minor template syntax stuff; it took me less than an afternoon to convert. By the end we'd converted everything we built to Svelte v3, and if you know anything about Svelte v3 versus what came before you'll understand that doing so was an undertaking. v3 changed so much about how components were built and assembled that the official tools to port from v2 to v3 never... happened? I ended up building a translation layer for us so that we could use our v2 stores in v3 components and nest v3 components inside our v2 hierarchy, and imaginatively called it svelte-translator. It's the sort of project that has a shelf-life by definition but it let us upgrade to v3 piece by piece over a surprisingly long amount of time while never having to make a hard break and take the UI Engineering team offline.

Which was the right call because it took us 16 months to fully port everything. I posted a thread on twitter about this, but the wildest takeaways for me were that during the port we doubled the number of components and increased the number of stores we had by 5x. Thanks to the translation layer we were able to convert it component-by-component and start building all new components in v3, so our forward momentum never really slowed down.

Svelte stores #

Svelte also provides a very lightweight data storage/notification method that they call stores. We ended up using stores all over the Crucible UI. They integrate beautifully with Svelte templates, and being able to combine them using derived stores is a fantastic way to filter and combine data in a reactive way.
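A small sketch of the derived-store pattern, with made-up HUD-ish values:

import { writable, derived } from "svelte/store";

const health    = writable(100);
const maxHealth = writable(150);

// Recomputed automatically whenever either input store changes
const healthPercent = derived(
    [ health, maxHealth ],
    ([ $health, $maxHealth ]) => ($health / $maxHealth) * 100
);

healthPercent.subscribe((percent) => console.log(`${percent}%`));

health.set(75); // logs "50%"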

Svelte wasn't all upsides though; like any technology choice there were some limitations and issues we ran into. The issue that seemed like the biggest one up front ultimately wasn't such a big deal at all. I had a lot of worries about Svelte's unfamiliarity making it difficult to hire people or onboard them with it once they joined the team. In the end this concern was pretty unfounded, as most programmers on the team were reasonably proficient with Svelte around the time that they started to understand the codebase as a whole. It did lead to some situations where using less-common features of Svelte would leave folks scratching their heads for a bit, but these were good teaching moments and the Svelte docs/tutorial were usually informative enough to help everybody get onto the same page.

Svelte issues #

One issue we ran into with Svelte was a set of long-standing bugs in the framework that haven't been a priority for the maintainers to fix. This isn't their fault, all OSS is about trade-offs, but there were some things that caused us repeated pain for months/years. Outro transitions were maybe the most problematic. Svelte has support for intro/outro transitions (or transitions that play on both) and we continually ran into weird edge cases where components with outro transitions would either stay in the DOM, throw JS errors when being removed, or prevent parent components from unmounting correctly. These were always difficult to chase down and almost impossible to create a reduced reproduction of in the Svelte REPL, so it was very difficult to get any traction on them. Fixing them requires a very strong grasp of the Svelte internals and was never something I was able to carve out the time to really tackle.

Fixing bugs in Svelte itself wasn't trivial either due to a variety of factors. I'm a firm believer in using and contributing to OSS but was never able to give as much back to Svelte as I wanted. There were multiple factors that played into this. Svelte's compiler is not small and there's a lot to learn there. It's also written in TypeScript, which I'm sure makes it easier to work in for folks used to it, but all the type annotations made the code harder for me to read. I was able to get better at this over time as more and more of our dependencies ported over to TS but it never totally went away as a barrier. There's also the interesting issue that the framework is in two parts: the build-time static analysis and code-generation, and the run-time framework. Figuring out exactly where a bug was coming from took longer than I expected it would at the outset due to needing to isolate which part of the codebase the issue lived in and where exactly it came from.

Part-way through our development Svelte added the {#await} template tag, which lets Svelte templates be reactive to promise lifecycles. It's hugely useful for simplifying waiting for a promise then updating the template, but we found it to have some issues and weird interactions with other parts of the framework that can lead to stale elements left on the page or difficult-to-trace JS errors being thrown.
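Typical usage looks like this — the component names are illustrative:

{#await rewardsPromise}
    <Spinner />
{:then rewards}
    <RewardsScreen {rewards} />
{:catch error}
    <ErrorMessage {error} />
{/await}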

I would absolutely choose Svelte again for my next project, I think all the upsides strongly outweigh the downsides, but it definitely isn't a safe choice and I've had more luck with other frameworks in the past when it comes to fixing issues within the framework. I'd like to grab a few small issues off the Svelte repo and spend some time figuring out how to fix them so I can be more useful in the future when running into bugs in Svelte. I never managed to find the time to do that while on Crucible beyond filing a couple of issues we could get simplified reproductions for and I regret that. OSS works best when programmers are able to contribute back to the project and I think I could've contributed a lot more to Svelte.

XState #

We didn't start Crucible using XState but by the end of the project it was one of our most valuable tools. XState is a statechart library; statecharts aren't necessarily a concept that everyone is familiar with, but they're essentially hierarchical finite state machines. They're more useful than FSMs because they don't suffer from the combinatorial explosion issues you see in an FSM when many states can transition to many other states. Because they have a hierarchy to them you can create discrete layers within the machine and group states and their transition events in logical, understandable ways.
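A minimal hierarchical machine shows the idea — this is illustrative, not Crucible's actual statechart. The nested matchmaking states share a single CANCEL transition instead of each child repeating it, which is exactly where statecharts beat flat FSMs:

import { createMachine, interpret } from "xstate";

const menuMachine = createMachine({
    id      : "menu",
    initial : "idle",
    states  : {
        idle : {
            on : { PLAY : "matchmaking" },
        },
        matchmaking : {
            initial : "searching",

            // One transition handles cancellation from any child state
            on : { CANCEL : "idle" },

            states : {
                searching : {
                    on : { MATCH_FOUND : "found" },
                },
                found : {},
            },
        },
    },
});

const service = interpret(menuMachine).start();

service.send("PLAY");
console.log(service.state.value); // { matchmaking: "searching" }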

Before we used statecharts though, we used Page.js as a router. Client-side routers are pretty good for more traditional SPAs; they do a great job mapping a specific URL to a specific component view. We worked like that for months at the start of the project but the issues kept slowly building up. This isn't a knock against Page.js or routing in general, more of a slowly dawning realization that we needed a more powerful tool. We were running into a lot of issues around incorrect states, back button shenanigans, broken links, and a general inability to understand what the legal routes were from any given route.

As I got more and more bothered by this situation I started doing some digging on a topic I had wondered about before but never had time to investigate: was there a router where you could define only certain other routes as valid while on that route? After thinking critically about that desire for about fifteen seconds I realized I was imagining a finite state machine. I went looking for routers built on top of a FSM and came up broadly empty-handed. I started to get worried that I would need to build one myself. Then I got the combination of keywords exactly right and stumbled upon abstract-state-router. Turns out that I was unsurprisingly not the first engineer to think that a FSM or statechart might be the solution to our routing problem.

Unfortunately, abstract-state-router wasn't quite right for our needs. It had a Svelte integration but we had some trouble getting it working, and the way it married URL path/query management with routes plus a light statechart was more confusing than actually helpful. The big ah-ha moment for me in figuring out how to solve this problem was when it occurred to me that I didn't actually care all that much about the URL. Our users would ideally never even know that the UI they were seeing was drawn by a web browser, so if we abandoned that limitation we could get a bit more wild with our application structuring.

Forget routing, we're using states #

Enter XState.

Well, sort of. I was still new to statecharts and sort of overwhelmed by the whole idea, so I needed to find a small feature I could prototype XState with, somewhere it would make sense and add recognizable value to a flow that needed more structure. I found that flow in my personal nemesis at the time: matchmaking. The flow when a client decided to search for a match was deceptively complex on the UI side as we managed multiple sets of inputs and outputs between us and the services backend. We also didn't entirely own the matchmaking state in the UI but owned a small bit of it, so reconciling the two sources of truth there had always been problematic. It had also grown organically and been put together rather quickly, so even my first few passes at wrangling the spaghetti monster it had become hadn't really made it that much clearer. It made for the perfect XState prototype.

I built out the flow based on services state and UI state, I defined all the valid transitions and where they'd go, I wired up the inputs/outputs to it, and then I turned it loose on matchmaking in an actual client and it worked beautifully.

I could look at the statechart and see the entire flow through the states, I could see every valid transition that the statechart could take and where it would go, I could paste it into the XState visualizer and actually see a DIAGRAM OF ALL THE STATES AND WALK THROUGH THEM INTERACTIVELY.

It was amazing. I was absolutely hooked. Once the prototype got polished up a touch and committed I asked one of the more senior UI Engineers on the team who was leaving soon if they'd do a quick spike on an idea I had:

"What if we used XState to control the entire UI?"

Since I had already been looking for a better solution than a traditional client-side router, and the XState prototype for the matchmaking flow had gone so well, what did we have to lose?

The engineer spent a few days putting together a very rough spike of the idea, which was essentially that instead of using a URL to pick the component to draw we'd let the statechart state do that. We could then nest components inside of each other using the hierarchical nature of the statechart, so that things like a layout component wouldn't even need to care about what the statechart was doing at any given time. All they'd need to know was "I have child components, I should draw them". When the statechart transitioned to a new child state it could then generate a new tree of components, and Svelte would take care of persistence for us so that the same top-level layout component instance would be kept if only the child state changed. Using the statechart like this also meant that it would hopefully now be possible to tell what components would draw in any given state, because you could see them listed explicitly in the statechart definition.
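An illustrative sketch of that core idea — state nodes carry their component in meta, and every transition rebuilds the structure handed to Svelte. This shows the concept, not the spike's actual implementation:

import { createMachine, interpret } from "xstate";

const machine = createMachine({
    id      : "menu",
    initial : "lobby",
    states  : {
        lobby : {
            meta    : { component : "MainLayout" },
            initial : "play",
            states  : {
                play    : { meta : { component : "PlayScreen" } },
                rewards : { meta : { component : "RewardsScreen" } },
            },
        },
    },
});

interpret(machine)
    .onTransition((state) => {
        // state.meta is keyed by active state node id, so the active
        // branch of the chart maps directly onto a component hierarchy
        const components = Object.values(state.meta).map(({ component }) => component);

        console.log(components); // [ "MainLayout", "PlayScreen" ]
    })
    .start();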

The engineer finished their spike, it worked, it seemed awesome, and then they left the team. I didn't have time to work on it any further so shelved the code until I had time to finish it up and integrate it into the Crucible UI. Our codebase was reasonably small at the time, so while this was a drastic change it seemed totally within the realm of possibility.

Except then I uh... lost the code.

Their account had been deleted and I had only saved a changelist number from that account. Oops. The good news is that it gave me a chance to get very familiar with the code that would listen for the statechart to transition, walk the selected nodes, and build a nested object of components/props. I got very familiar with that code because I had to write it all. From memory. Learned an important lesson that day.

It ended up taking me about a week but I eventually pulled out all of our routing infrastructure and replaced it with two statecharts, one for the Main Menu and one for the HUD. Took a bit to get the rest of the team on-board since statecharts were such a sharp left-turn from what we had been doing before, but by leaning on the changes to the matchmaking flow as an example of the utility of the statechart approach we got there.

Once the small prototype-y version was in place it was time to add some features. One of the biggest things I had been dying to get for some time was per-component code splitting via import(), since we were using rollup and Coherent GT natively supported dynamic imports. Building our app this way meant that by adding support to our core tree-building functionality we could then make each component at each level a dynamic import() so we were only loading the code necessary for a given state. Since component loading was all off the local disk this didn't add any end-user perceptible delay to the state change (1-2 frames at most for code we didn't already have) but it let us have a lot less code actively loaded into the browser at any given time. Dynamically loading code this way was broadly successful at those goals but we eventually had to walk back from it in a few specific places where switching back and forth rapidly between states could lead to some strange module loading errors from Coherent GT.

Another big benefit we were able to gain from switching over to XState for our application state is that XState supports invoking child machines, essentially nesting a whole other statechart within a state. It sounds a bit like madness, but it's actually super valuable when structuring statecharts. The child machine has its own context object, fires its own events, and can fire events into the parent machine; it's a lot like taking one huge function and breaking it out into smaller functions. Only better, because ✧・゚: ✧・゚:statecharts are magical:・゚✧:・゚✧. It's an invaluable way to manage the size of the statecharts since writing out logic that way can be pretty verbose. The original support in our tree-building layer only allowed for child machines to render components, not grandchildren or great-grandchildren, but even still it was a hugely effective way to cut down the size of our statecharts to something more manageable.
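A sketch of what invoking a child machine looks like — the child runs with its own context and events, and reaching its final state resolves the parent's onDone transition:

import { createMachine } from "xstate";

const rewardsMachine = createMachine({
    id      : "rewards",
    initial : "revealing",
    states  : {
        revealing : {
            on : { DONE : "complete" },
        },
        complete : {
            type : "final",
        },
    },
});

const menuMachine = createMachine({
    id      : "menu",
    initial : "rewards",
    states  : {
        rewards : {
            invoke : {
                id     : "rewards",
                src    : rewardsMachine,
                onDone : "play",
            },
        },
        play : {},
    },
});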

We used this setup for quite a while without significant changes until I started getting fed up with how often our tree-building logic ran on transitions, oftentimes rebuilding despite no meaningful changes happening. I decided on a flight that I would make a clean-room re-implementation of it without looking at the original, shifting my focus from "can this work" to "let's make this work more performantly". This eventually became xstate-component-tree and provided some significant performance boosts over the original prototype. It caches the result of resolving components and props for states, doesn't re-run on statecharts that don't have any components to redraw, and also changed the output format of the components and props so they were more usable from within the app. Swapping over took only about a day, but the amount of time saved over the lifetime of an application instance was significant.

This approach comes with a couple of notable downsides, though none that we found bad enough to seriously consider changing our approach. One of them was actually brought up by the author of XState, David Khourshid: they made the point that tying components directly to states makes it harder to refactor states, because you also have to rework the component ordering in that case. I can see that concern but we never found the practice terrifically onerous. All the other approaches I've seen for tying the statechart to component rendering, like XState activities, XState context, or XState's matches() method, had problems at least as annoying without the extra clarity of seeing exactly what components would render for a given state.

The other issue with this approach is that any component that wants to be able to render children within it has to explicitly opt into that. We built a helper to simplify it as much as possible so it's really only adding two things, but it's still manual work that had to be done.

<script>
// Where the Children helper lives is illustrative
import Children from "./Children.svelte";

export let children = [];
</script>

<Children {children} />

The <Children /> component there is a very small one that iterates the list of components & props it is passed, using <svelte:component> to dynamically render them along with any children they might also have below them. The explicitness of placing <Children /> in a component was generally a beneficial thing in hindsight; without it there would've been no mechanism to allow component authors to wrap the child components in any sort of DOM structure. We considered approaches that took advantage of <slot> but that approach requires you to invert the component structure in a way that didn't make a ton of sense. At the time we were considering these approaches there was also no runtime way to interrogate a component about what slot info it had been passed; this has happily been fixed with the addition of $$slots in a recent Svelte update.

It wasn't entirely sunshine and happiness with XState, though I lost count of the number of times a member of the team professed "I love statecharts" to me. There were a few things about the transition that took a bit for us to become numb to. Chief among those was the sheer size of a statechart: while that size brings incredible advantages in being able to see the entirety of the logic laid out in front of you, the JSON-ish structure of it can also pose quite a challenge to read. Authoring statechart code is still challenging, but the addition of the XState visualizer made it way more straightforward to really see what the statechart was doing. We still definitely ran into places where XState didn't do quite what we had expected and those could take a couple of hours to identify the issue, create a small reproduction, and then report the issue on their github. Sometimes we found legitimate bugs, sometimes we were doing something kinda strange and there was a better approach, and once or twice we proposed new features that didn't exist and they got added. The maintainers of XState are very dedicated to teaching the world about statecharts and we absolutely benefitted from their expertise.

Moving a large portion of our application logic to statecharts was also an... uncommon thing to do. No one on our team had any real experience with them; I'd done some work in college with finite state machines but never anything at the level we were doing for Crucible. There was a real learning curve there, and a lot of consulting the XState docs both for me and the other engineers. I also spent a lot of time teaching how to think via statecharts, usually via examples on a whiteboard or by waving my hands around a lot while having a conversation with another engineer. These conversations could be time-consuming but I still feel like we gained a lot of valuable understanding out of them, and I've had multiple engineers on the team tell me that they'll be pushing hard on using statecharts in future projects.

There's nothing better than knowing that you've broken the brains of a few of your coworkers forever, after all.

rollup #

Like most modern web apps we needed a way to bundle our code together. While our browser platform did support ECMAScript modules, so in theory we could've avoided bundling, doing so would've come at the cost of many more file loads as the browser resolved the dependency tree. Since we already knew we wanted to do build-time transformation of .svelte and .css files, using a bundler to combine and optimize our .js files was a natural next step. At the time we were deciding on the technical foundations for the Crucible UI there were really only two realistic choices: rollup or webpack.

Given that we were targeting only a single platform (technically 2, because we supported running the UI standalone in a browser) and knew that we'd need to be very careful with runtime dependencies, a lot of the strengths of webpack weren't a huge priority for us. That combined with rollup's smaller output due to tree-shaking, faster output due to a lack of function wrappers around modules, and simpler plugin API made the choice to use rollup pretty clear. Not going to pretend that there wasn't also some personal bias in there since I've contributed code to rollup in the past, but I did try to give webpack a fair shake.

Thinking back on it, rollup ended up being broadly the right choice for Crucible's UI. Our build times were always reasonable, though definitely starting to bloat out a bit towards the end of the project. My local machine never got higher than about 35 seconds for a brand-new build, and rebuilds usually took anywhere from 5-8s depending on how taxed my machine was. More recent versions of rollup are adding support for better caching that can be persisted to disk, which I suspect would have a significant impact on that cold boot build time.

The ease of writing plugins for rollup led to some really valuable tools for the project.

There was a custom localization plugin that imported the XML storage format and output a series of JS chunks containing large JSON translation objects, wrapped in JSON.parse(...) when they exceeded 10k to take advantage of the faster parsing per Mathias Bynens' research. These chunks could then be dynamically loaded as the UI got updated locale preference info from the game.
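The core of the trick is easy to sketch — double-encoding the translations produces a valid string literal for JSON.parse to chew on, and the function name here is made up:

const emitLocaleChunk = (translations) => {
    const json = JSON.stringify(translations);

    // Only wrap once the object is big enough to benefit from the
    // faster parse; below that a plain object literal is fine
    const body = json.length > 10000 ?
        `JSON.parse(${JSON.stringify(json)})` :
        json;

    return `export default ${body};`;
};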

We also were able to import icon from "./file.svg"; and get back an SVG <symbol> reference we could use to show SVG images loaded from a sprite, thanks to a small custom plugin. Combined with a technique using CSS variables for fill and stroke, the use of build-time generated SVG sprites let us clean up a lot of the rough edges on our SVG presentation while still keeping a lot of presentational flexibility.

We served the browser build off of a tiny plugin that started up a sirv instance. We even generated an HTML file from each rollup entry point that was able to hook into the Modular-CSS graph and ensure that we statically loaded all the CSS required for each entry chunk. There was another custom plugin that existed solely to strip any mocks or mocking infrastructure out of the game builds to ensure we never accidentally shipped them to end-users.

These were all relatively small things that often exist in various forms already but our various unusual requirements/environment precluded us from being able to use off-the-shelf versions.

Were I to start a new project today I'd definitely spend some time investigating snowpack but would have to figure out how to get Modular-CSS working with it first.

Modular-CSS #

The choice to use Modular-CSS was never really in question. It's based on an idea so good that I built an implementation for myself because I wanted to use it but nothing existed that worked for my environment. CSS Modules is a banger of an idea and I wish that more people understood it and took advantage of the simplicity and expressiveness it provides. The selector scoping per file, trivial composition with composes, and general it's-all-still-CSSness of it made it easier for new engineers to pick up and run with, but kept us safe from class name conflicts or extreme selector depth. A team of up to 8 engineers were able to all contribute to a reasonably-large codebase without introducing any CSS that I would consider "write only" CSS, or CSS which you can write but never change later due to a fear of breaking something unrelated. I find that level of confidence in a project this large pretty surprising. All the more so given that I've made "write only" CSS decisions when working on solo projects for fun without even meaning to.

There were a few things that we ended up needing from Modular-CSS while working on Crucible that it didn't provide out of the box, but fortunately I know the author. It's me. I'm the author.

One of the most useful was a feature that already existed in Modular-CSS but was never part of the CSS Modules spec: :external. I'm unclear how one could actually build a CSS Modules project of any significant size without it given the alternatives and workarounds I've seen proposed on GitHub, but fortunately I built that some time ago for Modular-CSS so we didn't have to worry about it. One feature we did end up using more than I expected was @composes, which essentially lets one file masquerade as an entirely different file. We found that most useful for supporting multiple different game modes in the HUD: we could define one base gameplay HUD that all the others were based off of and then use @composes to make surgical updates to it while still keeping almost all of the exported classnames the same without having to repeat them.

Given the performance concerns we had building game UI, something that could output raw CSS was an absolute must. None of the CSS-in-JS solutions at the time had a truly zero-cost runtime implementation so we never seriously considered them. This has improved significantly in the intervening years but this is one place where I felt like hewing closer to folks' previous experience was valuable. Writing good CSS is hard enough without asking engineers to learn completely new workflows, and Modular-CSS struck a particularly nice balance there. With the @modular-css/svelte package we were actually able to improve further on our bundle size and output speed. Prior to its introduction we had used Modular-CSS within Svelte like this:

<div class="{css.box}">...</div>

<script>
import css from "./module.css";
</script>

This approach meant that Svelte helpfully set up a bunch of infrastructure for every component watching that css value to compare its dirty state and update elements if it ever changed. But css in this component is static, it literally cannot change after its build-time creation. To work around Svelte's extremely-helpful-but-unwanted-help we used a package that provided a Svelte preprocessor to scan modules before the Svelte compiler saw them and inline any fully-static CSS classes wherever possible.

<!-- before -->
<link rel="stylesheet" href="./module.css" />

<div class="{css.box}">...</div>

<!-- after -->
<div class="abc123_box">...</div>

After the preprocessor had done its work there was no runtime code generation required at all for most of the class replacements we were using so we paid no cost in terms of bundle size for that functionality while still keeping all the benefits of Modular-CSS.

Another seemingly-small thing that saved quite a bit of time during development was handling elements that could draw in multiple states outside of Modular-CSS itself. It seems counter-intuitive, but using composes and having multiple classes for an element that can be in multiple states was generally overkill for our needs. Instead we tended to have a single root class for a given piece of the UI, and any visual changes that needed to be expressed due to changing factors were handled via data- attributes. These were nicely expressed in the CSS thanks to PostCSS Nested and helped to keep our changes logically grouped in the CSS itself. Since Modular-CSS class names are modified during the build to keep them unique across the project, the use of attribute selectors prefixed by these classnames didn't expose us to any selector overlap concerns but allowed us to write flexible styles that could adapt to a variety of changing conditions.

/* Without data-attributes */
.thing {
    /* basic styles for thing go here */
}

.disabled {
    composes: thing;

    /* styles that need to be changed when .thing is disabled */
}

.selected {
    composes: thing;

    /* ditto, but when thing is selected */
}

/* With data-attributes */
.thing {
    /* basic styles for thing go here */

    &[data-disabled="true"] {
        /* styles that need to be changed when .thing is disabled */
    }

    &[data-selected="true"] {
        /* ditto, but when thing is selected */
    }
}

This allowed us to use much more natural-reading template code to apply the changed visual states to the element.

<!-- without data-attributes -->
<div class="{disabled ? css.disabled : selected ? css.selected : css.thing}">
    ...
</div>

<!-- with data-attributes -->
<div class="{css.thing}" data-disabled="{disabled}" data-selected="{selected}">
    ...
</div>

This approach does theoretically open you up to accidentally applying conflicting styles, but it was never something we actually ran into in practice.

Being the person who maintains a critical piece of your infrastructure can be great at times (you can get really good support) but it can also come with a fair share of drawbacks. If I was busy or burned out and a bug with Modular-CSS cropped up, there was a good chance it wasn't getting fixed until I was able to get out of my funk. There are no other contributors to that project of any significant size, so it also meant our bus factor was on the scarier side. This concern was somewhat lessened by Modular-CSS being a reasonably stable and proven project, but we definitely found some new places where it wasn't working as well as it should. As we got further down the code-splitting path via import() we discovered that Modular-CSS didn't do a good job allocating CSS files into bundles that matched the JS bundles rollup produced. Reconciling those changes was a week-long project for me that also required several small multi-hour bursts to get truly correct. I suspect there's still at least one bug in that implementation, but fortunately rollup's support for non-JS files has improved to the point where the custom chunking logic in Modular-CSS can be thrown out.

PostCSS (via Modular-CSS) #

Modular-CSS is powered by an amazing piece of software called PostCSS which is what gives it the ability to parse/rewrite/create CSS from JS. PostCSS is a very powerful tool and the ease of writing plugins for it has led to a huge collection of additional functionality you can add to it. Since Modular-CSS exposes the ability to add PostCSS plugins to your config we were able to take huge advantage of both existing plugins and new ones that we created specifically for our needs while building Crucible.

Autoprefixer allows for writing prefix-free CSS and means that I haven't had to think about browser prefixes in years. Most evergreen desktop browsers have begun moving away from even requiring prefixes on properties to enable them in a cross-browser way, but given that the engine powering our UI is a few years older there are still plenty of prefixes it requires. With autoprefixer and a reasonably-accurate browserslist config setting we could trust that only the rules that needed changing would change and we'd never have to think about it. This was especially useful because of how our UI could be built for the game (more prefixes needed) or for desktop Chrome (many fewer prefixes needed).

PostCSS Nested is another hugely valuable PostCSS plugin that adds a relatively small thing. Being able to nest CSS rules so that you don't have to constantly repeat (and potentially mistype) selector prefixes is a huge developer experience win. In my experience nesting can also lead to really out-of-control selector specificity so we strictly limited nesting to 2-3 levels deep and anything further needed to be looked at extremely carefully. We didn't ever automate that checking via stylelint though it would have been very valuable to do so to reduce the code review burden for reviewers.

postcss-functions was extremely useful for injecting custom build-time functions into our CSS to automate away things that were complex enough that they'd be likely to be mistyped. The font-scaling approach we took required that every single font-size declaration involve a CSS variable and a bit of math, so we wrote a fontscale() function that took a size in rem and then output the correct values. We also had functions for opacity() and lightness() that could be used to take colors from our shared, centralized list and adjust their opacity by converting them from hsl to hsla, or adjust their lightness value, to reduce the need for so many defined colors. All of these functions were useful in reducing our maintenance costs and avoiding the introduction of bugs via mistyping.
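Wiring those up via postcss-functions looks roughly like this — the fontscale() math shown is illustrative, not Crucible's actual formula:

// postcss.config.js
const functions = require("postcss-functions");

module.exports = {
    plugins : [
        functions({
            functions : {
                // fontscale(1.5) -> calc(1.5rem * var(--font-scale))
                fontscale : (size) => `calc(${parseFloat(size)}rem * var(--font-scale))`,
            },
        }),
    ],
};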

One of the first custom PostCSS plugins we started using allowed for using information about the build type to make decisions around CSS styling. We had some elements that needed slightly different CSS treatments in-game versus in the browser, or we even completely hid a few elements in tests because they added visual noise and weren't valuable there. We had a list of defined values that were shared across the Svelte templates and the CSS and the custom plugin took care of rewriting any media queries that used them.

.rule {
    /* always in the build */

    @media ISBROWSER {
        /* only in browser builds */
    }
}

I've been using the * border-box hack from Paul Irish essentially since the article was posted. The default of content-box is correct in that it matches the spec, but I've always felt that IE got it right: including borders & padding in the overall width and height of an element is the correct choice. Unfortunately, Ryan McMillan discovered while doing perf audits that the * selector was inflating style recalculation times across the entire app by 1.5x our entire budget, so Paul's hack had to go. To continue being able to take advantage of the joys of border-box without having to remember to splat it in anywhere that specified a padding or border, Ryan built a custom PostCSS plugin. If it found any of those properties being set in either their short or longhand versions it would inject a new declaration into the rule to change the box-sizing model. Fortunately, after some testing Ryan found that this approach didn't dramatically alter our style recalculation times, so it was the solution we shipped.
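A minimal sketch of that idea using PostCSS 8's visitor API — not Ryan's actual implementation:

module.exports = () => ({
    postcssPlugin : "inject-border-box",

    Rule(rule) {
        const needsBoxSizing = rule.some((node) =>
            node.type === "decl" && /^(padding|border)/.test(node.prop)
        );

        const alreadySet = rule.some((node) =>
            node.type === "decl" && node.prop === "box-sizing"
        );

        if (needsBoxSizing && !alreadySet) {
            rule.append({ prop : "box-sizing", value : "border-box" });
        }
    },
});

module.exports.postcss = true;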

A slightly cleaner approach if we were to do this again would be taking more advantage of working within Modular-CSS and have it instead inject a composes rule pointing at a single shared class. That way there'd only ever be a single box-sizing rule in the entire codebase and the output could shed a fair bit of repeated weight.

/* authored */
.foo {
    padding: 0.25rem;
    width: 5rem;
}

/* build output */
.foo {
    padding: 0.25rem;
    width: 5rem;
    box-sizing: border-box;
}

/* should've done */
.foo {
    composes: borderBox from "/root.css";
    padding: 0.25rem;
    width: 5rem;
}
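
For reference, here's a rough approximation of the injection plugin, again using the modern PostCSS API; Ryan's real version handled more edge cases than this.

const PROPS = /^(padding|border)(-top|-right|-bottom|-left)?(-width)?$/;

const plugin = () => ({
    postcssPlugin : "inject-border-box",
    Rule(rule) {
        const needs = rule.some((node) => node.type === "decl" && PROPS.test(node.prop));
        const has = rule.some((node) => node.type === "decl" && node.prop === "box-sizing");

        if (needs && !has) {
            rule.append({ prop : "box-sizing", value : "border-box" });
        }
    },
});

plugin.postcss = true;

module.exports = plugin;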

The most recent custom PostCSS plugin that was built for Crucible added support for an @repeat at-rule. We needed to be able to restart certain CSS animations from the beginning once they had already started and the best solution we'd found was to create two identical @keyframes definitions and swap the animation-name between them. All credit goes to this article on CSS Tricks for showing us this reasonably straightforward solution. Unfortunately, like any situation where you need two blocks of code longer than... 1 line to stay identical, we immediately started to see small cases of drift. It also made editing the animations significantly more time-consuming because you either had to edit both @keyframes blocks identically while prototyping or edit only one and then make sure you were checking the right animation out of the old one and the new one.

@repeat was my attempt to solve this in the best way I knew how: WITH BUILD TOOLING. It's my answer like 80% of the time, I know, but that's because it's usually a really good answer. Other solutions like postcss-for were investigated but they all did more than we wanted and had syntax choices that made it harder to understand the intent. Building out a small plugin to enable us to duplicate arbitrary blocks of CSS an arbitrary number of times made the meaning of what folks were seeing pretty unambiguous.

@repeat <iterations> { /* rules */ } would repeat anything you put inside the brackets <iterations> times by cloning the nodes in the PostCSS AST and appending the iteration count to the end of each @keyframes name. This did lead to warnings from stylelint about undefined animations but we manually silenced those in the few places where we needed to use this approach.

Here's a simplified example of what it looked like

/* authored */
@repeat 2 {
    @keyframes anim {
        to {
            color: red;
        }
    }
}

/* build output */
@keyframes anim1 {
    to {
        color: red;
    }
}

@keyframes anim2 {
    to {
        color: red;
    }
}

By removing the need for a human to hand-maintain two duplicate blocks of oftentimes extremely complicated animation code we were able to speed up animation authoring while also making it much safer.
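
The plugin itself doesn't need to be much more complicated than this sketch (modern PostCSS API again, the original predates it): clone everything inside the at-rule once per iteration and tack the iteration number onto any @keyframes names.

const plugin = () => ({
    postcssPlugin : "repeat",
    AtRule : {
        repeat(atrule) {
            const count = parseInt(atrule.params, 10);

            for (let i = 1; i <= count; i++) {
                atrule.nodes.forEach((node) => {
                    const clone = node.clone();

                    // @keyframes anim -> @keyframes anim1, anim2, ...
                    if (clone.type === "atrule" && clone.name === "keyframes") {
                        clone.params += i;
                    }

                    atrule.before(clone);
                });
            }

            atrule.remove();
        },
    },
});

plugin.postcss = true;

module.exports = plugin;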

ESLint #

Keeping to a consistent coding style is valuable on a team of any size and we were planning for Crucible to have up to 10 UI Engineers working on it. Getting an eslint configuration set up was one of the first things I did on the project. It started out with importing my own personal eslint config (@tivac/eslint-config) and then customizing it for the task at hand.

Since I was the only UI engineer on the team at that point it made establishing a code standard very, very easy.

Maybe... too easy?

The code standard was enforced both programmatically by our continuous integration server as well as manually where applicable in code reviews. I tried to refrain from pointing out stuff that the linter would catch but sometimes marked up code that was ambiguous so more context on the issue could be given. Our config was gradually updated over time as various changes were proposed and a quorum was reached. To help keep the consistency high and reduce the load on programmers we enforced fix-on-save, so any auto-fixable issues were corrected every time an engineer saved in their editor. This did lead to some rare instances of false positives that rewrote code in unusual ways, but those were undone with a quick ctrl+z and an // eslint-disable-next-line comment.

Shortly after we figured out our testing story it became clear that a single global eslint config wasn't going to work. Fortunately it supports a concept of overrides which allow you to specify different rules for files matching a glob selector but all wrapped into a single config. By using several sections we were able to have customized rules for UI code, test code, tools, etc and weren't running a bunch of Svelte-specific rules against Jest code or anything silly like that. Being able to segment configs like this became more and more important as the project went on and more rules were written/used because we definitely started to notice slowness in the linter.
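
A trimmed-down illustration of that overrides shape; the globs and extends values here are stand-ins rather than our actual config.

// .eslintrc.js
module.exports = {
    extends : ["@tivac"],

    overrides : [
        {
            // Svelte components get the svelte3 processor & rules
            files : ["*.svelte"],
            plugins : ["svelte3"],
            processor : "svelte3/svelte3",
        },
        {
            // Jest rules only ever run against test files
            files : ["**/*.test.js"],
            plugins : ["jest"],
            extends : ["plugin:jest/recommended"],
        },
    ],
};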

We used a few existing community plugins as well as a sprinkling of custom plugins. The community ones were eslint-plugin-import to help keep all of our file references consistent, eslint-plugin-jest to help avoid a bunch of testing footguns, and eslint-plugin-svelte3 so that we could lint our Svelte JS. For custom plugins we had some that were very Crucible-specific, like ensuring that we only called valid C++ endpoints or that all our custom Jest expect methods were always called with await in front of them. There were also some that were more general purpose, designed to do things like ensure that we used Svelte stores correctly or enforce consistency by requiring destructuring whenever we pulled a single value out of an object in Svelte reactive statements. Custom plugins were very valuable in these cases but weren't exactly free, each one took several hours to a day to build depending on the amount of AST spelunking required to get the selectors correct. Fortunately https://astexplorer.net exists and is incredibly useful when prototyping these sorts of things.
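
As an example of the flavor of rule involved, here's a sketch of an "always await custom matchers" check; the matcher names and message are invented for illustration.

const MATCHERS = new Set(["toMatchScreenshot", "toBeVisibleInGame"]);

module.exports = {
    meta : {
        type : "problem",
        messages : {
            mustAwait : "Custom matcher {{ name }} must be awaited",
        },
    },

    create(context) {
        return {
            // Matches expect(...).toMatchScreenshot(...) style calls
            "CallExpression > MemberExpression.callee > Identifier.property"(node) {
                if (!MATCHERS.has(node.name)) {
                    return;
                }

                const call = node.parent.parent;

                if (call.parent.type !== "AwaitExpression") {
                    context.report({
                        node : call,
                        messageId : "mustAwait",
                        data : { name : node.name },
                    });
                }
            },
        };
    },
};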

One of my biggest disappointments was that I was never able to find enough time to prototype out a way to actually lint the Svelte template code. We could lint all the JS inside the templates using the Svelte plugin and that was hugely valuable, but we were never able to actually lint the markup itself. This led to several ad hoc changes to the template coding standard since there was no single source of truth or automatically applied standards. These weren't complicated style rules either, examples were "Any element that has more than one attribute should have each attribute on its own line" or "all attribute values must use double-quotes", the sort of usual thing you see from a style guide that feels like nitpicking when a human tells you about it but nobody thinks twice about if it's the computer being a jerk.

I started prototyping a standalone tool, svelte-template-lint, in an attempt to solve this issue but it never got very far. Duplicating so much of eslint's infrastructure never sat right with me, so every single time I picked it up I'd set it aside again shortly after.

After doing some more research and thinking about it while writing this document I think that an approach where the Svelte AST is converted to the ESTree format that eslint expects would've been more productive. That way the plugin can provide a custom parser and rules all in one and the rules can take full advantage of the entire API eslint has built around finding and reporting issues. This alternative is heavily-inspired by graphql-eslint's approach and while I'm a bit leery of the challenges around AST compatibility I think those would've been much simpler than recreating eslint almost entirely.

Maybe next time I'll get it right.

stylelint #

The arguments for stylelint essentially echo exactly what I said above about ESLint. Consistent formatting, structure, etc is all very valuable on a team of more than 1 developer. I hook up eslint and stylelint even on projects where I'm the sole developer because they're incredibly useful at curbing some of my less-useful habits where I trade readability and clarity for terseness. It feels awesome & clever in the moment but writing code you can't understand later is always a recipe for disaster.

stylelint also used a mix of community and custom plugins, though we leaned a bit harder into custom plugins in this realm. We had custom plugins to do things like warn whenever a rule would create a new compositing layer, which is covered in more detail in the performance section below.

For community plugins the most valuable one we used was stylelint-order which let us set up very specific rules about declaration ordering within rules. This led to several disagreements about the "right" order but the consistency was absolutely worth it. I wish that we'd been able to find more really valuable stylelint plugins to use given how big an impact stylelint-order had but fortunately most of the rest of the functionality we required already came as built-in rules.
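
Getting started with stylelint-order doesn't take much; this is an illustrative subset, our real ordering list was far longer and more opinionated.

// .stylelintrc.js
module.exports = {
    plugins : ["stylelint-order"],

    rules : {
        // Custom properties first, then regular declarations
        "order/order" : ["custom-properties", "declarations"],

        "order/properties-order" : [
            "composes",
            "position",
            "top",
            "right",
            "bottom",
            "left",
            "display",
            "width",
            "height",
        ],
    },
};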

Building the UI #

In a previous role I'd built and maintained a build pipeline with 20+ steps for really squeezing every single ounce of performance out of complicated web sites using a tool I built, dullard. While that approach had its value, the optimization needs for the Crucible UI were more along the lines of "build it in production mode and minify the CSS/JS" so putting more logic into the rollup config seemed reasonable. I specifically wanted to try and streamline things considerably more this time around so I was immediately drawn to npm scripts for controlling rollup's various pieces of functionality. Small, easily-composable pieces of functionality that could be orchestrated directly from package.json seemed like they could lead to a lot less build maintenance. Overall I think that given the needs of the Crucible UI project it was the right approach. The composition and use of environment variables (mostly via rollup's --environment but also sometimes cross-env for non-rollup tools) let us set up multiple flows using the same base setup without too much work. By the end of the project we only had around 10 scripts defined and several of those were low-level building-block scripts that users never really interacted with.

The core functionality consisted of a handful of scripts, including npm start and npm run watch.

Both npm start and npm run watch in that list ran the same very small rollup task, but passed in different command-line flags and environment variables that determined which rollup plugins would be loaded and how it'd behave. npm start for example actually ran npm run watch -- --environment=SERVE,BROWSER under the covers, which itself was running rollup --config --watch. It was more indirection than we ended up really requiring but I appreciated how straightforward it kept each script in package.json. The overall size of the scripts block in our package.json wasn't the easiest thing in the world to manage and I would've really liked to be able to add comments, so the next time I try out this method I'd like to look into one of the tools that proxies npm run-script commands into a custom tool, letting configuration live outside of package.json and include comments.

One of the least successful outcomes of this particular method of building was that npm outputs a ton of information, especially when a script breaks. For UI Engineers who were used to npm output this was generally a non-issue, but when we had other disciplines building their UI locally or making small changes it never failed to lead to a panicky DM to me and a bit of confusion about what precisely had gone wrong. Setting up a .npmrc config file or appending --quiet/--silent to those scripts would have helped: end-users would've had less spam in their consoles that meant nothing to them and I could've re-enabled the default verbose output when debugging exactly what scripts were doing under the covers.

I learned pretty quickly when going down this scripts path that using the long-form version of every CLI flag was always the right choice. I read that... somewhere after I'd already learned it so I'm glad I'm not the only one who has run into it. Life is too short to have to go read the docs about a tool to figure out what --gi does when you could've written --global-identifiers and had a much better chance at understanding it the next time you come around looking to see what your script does.

Standalone Build Tools #

It wasn't often but there were a few times during the project where we needed a standalone or one-off tool to do some work. These ranged from tools that eventually were part of every CI run to a suite of tools I built to help trim down our totally out-of-control unique color usage that I deleted with gusto once we had pared down our color usage to a reasonable level. Being able to throw away code when it's reached the end of its useful life isn't always fun but if you think about it as less code you'll need to maintain going forward it helps.

Color Winnowing #

I've referenced this here and there in this document but at one point the Crucible UI had over 250 unique colors in use. These were defined inline in .css files scattered all over the project, some in .svelte files, it was a mess. I ended up writing a suite of scripts to help tame the monster we had created. There was a script that could walk the project and extract all the unique colors being used, another that sorted them all by similarity, one that took those sorted colors and wrote them out as Modular-CSS @value elements using names from a list, and one final script that walked the entire project again updating colors in-place based on the new list of shared colors. Finding, parsing, comparing, and writing colors back out in this way would've been a total nightmare if I hadn't used chroma.js for most of the heavy lifting.

I hadn't spent a lot of time on color theory so I quickly had to go do some learning on how to even compare colors, my first attempts were laughably naive and ended up grouping colors in totally nonsensical ways. Eventually after learning about ΔE* (though admittedly not understanding all of it) I was delighted to discover that chroma.js supported delta-e comparisons! Using that I was able to cluster colors much more naturally and begin the process of manually filtering out colors that were so similar as to be imperceptible. Removing those similar colors got the list from 250+ down to about 70 and from there I was able to widen the diffs a bit and do some more manual checking to get down to a more-reasonable list of 50ish unique colors. I would've preferred something more like 10-15 but it was enough of a visual consistency improvement that I decided to not try and make any more dangerous cuts than I already had.
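
The clustering boiled down to something like this sketch: greedily group colors whose ΔE* distance from a group's first member is below some threshold. The threshold value here is arbitrary for the example.

const chroma = require("chroma-js");

const THRESHOLD = 5;

function cluster(colors) {
    const groups = [];

    colors.forEach((color) => {
        // Find an existing group whose representative is close enough
        const group = groups.find(([first]) => chroma.deltaE(first, color) < THRESHOLD);

        if (group) {
            group.push(color);
        } else {
            groups.push([color]);
        }
    });

    return groups;
}

cluster(["#ff0000", "#fe0103", "#0047ab"]);
// => [["#ff0000", "#fe0103"], ["#0047ab"]]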

Having a list of 50 hsl() values was great but I had to be able to give them names that would mean... something to UI Engineers looking at the list. Naming colors is kind of hilariously hard but fortunately color-namer exists, and after another quick batch of scripting we had a list of 50 colors with names like "congressblue", "spacegray", and a few that I picked out by hand like "allyblue" and "enemyred". Another quick script to go back through all the existing colors in the code base and replace any last stragglers with their closest approved equivalent and I could finally almost rest.

Having a centralized list of colors was useful but I was hearing complaints that it made it harder to compare colors against each other or to know which color to use, since we had introduced a stylelint rule alongside the change that banned any inline color definitions to avoid the drift problem. A small rollup plugin that output an HTML file whenever the colors.css file changed, containing all the colors grouped by similarity along with examples, helped in that regard though I would've loved to come up with a better solution given more time. One approach that did help was installing a vscode extension like colorize so that by opening colors.css you could see a quick overview of all the colors together. For me personally that extension was enough to make the generated colors.html redundant but other UI Engineers preferred the HTML approach so we kept it throughout.

Path length checking #

Shortly after setting up the initial infrastructure for the Crucible UI I managed to break the entire team for almost a whole day. As I was installing dependencies and committing them to our Perforce server (because not committing dependencies is a crime) I managed to exceed the Windows max path length limitation of 260 characters and all hell broke loose. My local npm didn't care about the path length, and the Perforce server didn't care about the path length, but oh WOW did our CI infrastructure care about that path length. It quickly broke every single node that checked out the depot once the commit was in and once a node was broken in this way it was useless until the offending long path was removed and the node could be recycled manually. It also spread to any developer who pulled from the depot after the poison path commit and prevented various p4 operations from working correctly.

It. Was. A. Nightmare.

I was able to roll the change back reasonably fast but it still took out a variety of our build nodes along with multiple people in the studio who had the misfortune of syncing after I committed but before I reverted. Being the sort of person who tries to learn from my mistakes I quickly added a small script to our CI builds that would run prior to anything else and yell EXTREMELY LOUDLY if any path it found on the local disk was longer than 250 characters (I figured that left us a tiny bit of wriggle room). It was a small script that took me an hour or two to build, never got changed again, and saved me from exceeding Windows path length restrictions at least twice after it got built. Considering how expensive it is to take down even part of a large video game studio those couple of hours I spent building that tool were incredibly valuable.
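
The whole tool amounted to little more than a recursive walk, something along these lines; the exact reporting format is approximated.

const fs = require("fs");
const path = require("path");

const LIMIT = 250;

function check(dir) {
    fs.readdirSync(dir, { withFileTypes : true }).forEach((entry) => {
        const full = path.join(dir, entry.name);

        if (full.length > LIMIT) {
            console.error(`PATH TOO LONG (${full.length} > ${LIMIT}): ${full}`);
            process.exitCode = 1;
        }

        if (entry.isDirectory()) {
            check(full);
        }
    });
}

check(process.cwd());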

Image Validation #

Image assets for Crucible's UI came from all over the place with very little regularity of process around it, so partway through the project we began seeing issues with assets of incorrect dimensions or with other oddities. The clear way to fix this was some custom tooling! A small node script was written using image-size for getting asset dimensions and then comparing them against the engine constraints (power of 2 or at least a multiple of 4 in both dimensions, 1080p max resolution). This was integrated into the build pipeline so that eventually we could fail builds if new assets were added that exceeded those guidelines. We ran out of time to fix all the assets that we started out with that didn't adhere to those guidelines however, so the script never did actually get to fail any builds.
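
The core of the check was simple enough that a sketch can show almost all of it; assume walking the asset folders and collecting failures happened around this.

const sizeOf = require("image-size");

const isPow2 = (n) => n > 0 && (n & (n - 1)) === 0;
const isValid = (n) => isPow2(n) || n % 4 === 0;

function checkAsset(file) {
    const { width, height } = sizeOf(file);

    if (!isValid(width) || !isValid(height)) {
        return `${file}: ${width}x${height} isn't a power of 2 or a multiple of 4`;
    }

    if (width > 1920 || height > 1080) {
        return `${file}: ${width}x${height} exceeds 1080p`;
    }

    return false;
}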

We'd also been using .svg files as mask-image values from CSS so that we could set a background-color on the element and have multiple colors for a single image. This approach worked well enough given the limitations in Coherent GT but had a couple downsides. The first was that image loading wasn't instantaneous and we didn't have a preloading pipeline in place for a while so users would see a solid rectangle of whatever the image fill color was until the browser could load & render the svg mask. That could be worked around by hiding the entire element until the image was loaded but was tricky to apply in some situations. The more serious issue was that depending on the resolution that was being used any .svg file where the viewBox didn't precisely match the contents of the <path> elements inside it would cause random lines of background color to bleed out on the sides. This was always extremely noticeable so we needed a solution to avoid it.

The answer was, big surprise, more build tooling. A script was written which could parse the SVG and determine the actual tight bounding box around all its constituent paths using svg-path-bounds, then compare that against the defined viewBox property on the SVG. This was kept as a manual process since we were moving away from svgs as mask-image due to the loading issues, but was still valuable for places where the conversion hadn't yet happened. As we moved more and more svg usage into sprites using <symbol> the need for this step declined because we could specify coloring using CSS variables using a technique from fvsch.com.
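
The comparison at the heart of that script looked roughly like this; a real version needs actual XML parsing and has to merge the bounds of every <path>, the regexes here are just to keep the sketch short.

const getBounds = require("svg-path-bounds");

function viewBoxIsTight(svg) {
    const viewBox = svg.match(/viewBox="([^"]+)"/)[1].split(/\s+/).map(Number);
    const d = svg.match(/\sd="([^"]+)"/)[1];

    const [left, top, right, bottom] = getBounds(d);
    const [x, y, width, height] = viewBox;

    // Any slack between the declared viewBox and the tight bounds is where
    // background color could bleed out at certain resolutions
    return left === x && top === y && right === x + width && bottom === y + height;
}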

Features #

As game UI that happened to be implemented in web technology the Crucible UI needed to support a few features that are uncommon in my experience with building websites. These all took a bit of work to come up with reasonable solutions in the tech stack we chose but were table-stakes for accessibility in a game UI.

Scaling font/UI sizing #

The entire UI, both the Main Menu and the HUD, needed to be able to be scaled by the user. This is an accessibility feature but like almost all accessibility features it's also a valuable thing for many more users than would normally be clumped under "needing accessibility features". Scaling the UI helps when running the game at 4k because sometimes the default 1080p-optimized sizing is hard to read when the pixel count is scaled up so much higher. Text size scaling was also supported independently of the UI scaling so depending on a user's needs they could tweak the presentation of the UI to best fit their situations.

The scaling problem was one we tackled relatively early on and the entire research, development, and implementation was handled by Jessica Chappell. The task as it was defined was very vague and open-ended, "the UI and the font size need to be user-scalable, and they need to scale independently". Jessica took that vague mandate and spent a bit of time researching and experimenting before coming up with a solution that lasted for the entire rest of the project's life.

The core of it was powered by two CSS variables, --ui-scale and --font-scale, a font-size set on the document, and then the PostCSS helper function that handled applying a calc(...) function whenever a font-size needed to be set on a child element. We also standardized on using rem for sizing of all the elements to enable them to resize trivially when the root font-size changed.

:root {
    --ui-scale: 1;
    --font-scale: 1;
}

html {
    font-size: calc((0.5vw + 0.5vh + 0.25vmin) * var(--ui-scale));
}

/* Authored */
.text {
    font-size: fontscale(1.1rem);
}

/* Build output */
.text {
    font-size: calc(1.1rem * var(--font-scale));
}

Since the font-size was always tied to the root font-size via rem and adjusted by the --font-scale variable we could have the text scale up cleanly with the UI size but then also adjust independently. This system was hooked up to a fake preferences menu in the game for a long time and worked really well. When the real preferences backend arrived we ended up being really slammed and it took a while, but once the hookup was actually made it all pretty much worked seamlessly on the first try!

Colorblind support #

Another important accessibility feature in games is supporting various colorblind settings to enable the widest range of users to understand and enjoy a game. As Crucible was a competitive fast-paced shooter this was even more critical. The initial technical planning for colorblindness support happened long before it was ever implemented in the game, and we ran into a few challenges with our original plan and had to pivot.

The original plan was that we'd define the specific colors that would react to colorblindness preferences as CSS variables at the root of the app, then overwrite them whenever the preferences changed. This would be relatively straightforward and ideally require almost no plumbing throughout the app beyond making sure that the places where we wanted to respect the preference used the correct color value. I passed this plan over to the engineer who would be implementing it with full confidence that it would work out great.

It did not.

We'd already used the enemyRed/allyBlue colors all over the HUD but had also used our existing opacity() and lightness() PostCSS functions to change the values in subtle ways to achieve the visual effects we were looking for. Due to the dynamic nature of CSS variables this combination was wholly unsuccessful, because the build-time PostCSS functions couldn't resolve the CSS variables to anything meaningful.

So... back to the drawing board. Lucas Hugdahl was working on the implementation and suggested the idea that we could potentially override colors by setting a flag on the document element representing the type of colorblindness and use that to pick the right values from a list that were all defined in the same central colors.css file. After talking through it a bit more we had a plan that would require some build tooling support from me and ended up working out really great in practice.

The build tooling was a custom PostCSS plugin that knew the list of colors that were reactive when changing colorblindness modes. It would walk all the rules in our CSS and if it spotted one of those colors it'd duplicate the specific declaration that used the color with a version that was colorblind appropriate and behind the data-attribute flag on the document.

Here's an example

/* Source */
.rule {
    color: colors.ally;
}

/* Output */
.rule {
    color: colors.ally;
}

[data-colorblind="protanopia"] .rule {
    color: colors.allyProtanopia;
}

By approaching it this way we ended up with a minimum of duplicated effort, could support colorblindness modes in a totally static way, and continue using our existing tooling to adjust presentation of those colors to hit our visual targets.

I18n adjustments #

Perhaps the most website-like issue we had to solve was adjusting sizing/scaling of specific elements for specific locales. It's always good to get a shorter string from the translators if possible but that isn't always an option. Our solution worked a lot like how we handled colorblindness: we stamped the document element with a data-locale attribute representing the current locale. It's tiny, not at all hard to build, and provides an identifiable styling hook that's nice and clean to use with PostCSS Nested.

/* Authored */
.rule {
    width: 3rem;

    [data-locale="de-de"] & {
        width: 3.8rem;
    }
}

/* Output */
.rule {
    width: 3rem;
}

[data-locale="de-de"] .rule {
    width: 3.8rem;
}

This was generally all that was needed to fix case-by-case issues with translation strings. There were a few locales that got an across-the-board font-size adjustment (usually a bit smaller) but generally we tried to fix things in each location as much as possible so we could make the most informed decisions instead of nuking the problem from orbit. This led to more hands-on work fixing localization issues but also gave us much higher quality in how the bugs were fixed and in the ultimate presentation to the end user.

Svelte actions #

Actions in svelte are lightweight reusable pieces of functionality that you can mix into DOM nodes. Crucible used them for easily adding interaction audio cues to elements, firing statechart events on click, binding elements to keyboard shortcuts, removing completed CSS animations, showing tooltips on hover, and marking up elements into sections & grids to support 2d menu navigation via gamepad or keyboard. Overall we used svelte actions almost 400 times.
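
Actions are just functions that get the DOM node (plus optional parameters) and can return update/destroy hooks. Here's a tiny example in the spirit of our interaction audio cues; playSound() is a stand-in for whatever audio API is available.

export function clickAudio(node, sound = "ui_click") {
    const handler = () => playSound(sound);

    node.addEventListener("click", handler);

    return {
        update(next) {
            sound = next;
        },
        destroy() {
            node.removeEventListener("click", handler);
        },
    };
}

// <button use:clickAudio={"ui_confirm"}>Ready</button>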

Constants as named exports #

We used a lot of SCREAMY_CASE constants for values that mapped directly to game values, and found it very valuable to have those exposed as named exports so that we could have a single canonical place for each value to live. This was used for things like game modes, match phases, music states, input events, and many others. This allowed for a lightweight approach to standardizing and validating without the constant cost of a full-fledged types system.
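
In practice that just meant modules full of SCREAMY_CASE named exports; the specific values and module path here are invented but the shape is accurate.

// shared/match-phases.js
export const LOBBY = "LOBBY";
export const IN_PROGRESS = "IN_PROGRESS";
export const MATCH_END = "MATCH_END";

// elsewhere
import { IN_PROGRESS } from "shared/match-phases.js";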

Custom subscriptions manager #

One of the longest-lived pieces of code we had on the project was a tool I wrote that could be used to manage a pool of subscriptions. You'd add new cancellation functions to it with an optional name, then could cancel subscriptions by name or all at once. Crucially, if you added another canceller function with the same name as an existing one it would automatically run the previous canceller. This helped prevent a huge class of errors around forgetting to cancel previous subscriptions when re-creating them due to changing information, or forgetting to clean up old subscriptions when unmounting a component. We also built a custom ESLint rule for svelte components to ensure that if you created a subscriptions manager instance you destroyed it in the component's onDestroy.

The custom subscription manager we built was so successful that we eventually wrote small wrapping functions for attaching DOM listeners as well as setTimeout and setInterval that all returned canceller functions so they could be managed as well.
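
A sketch of the manager's core behavior as described above; the real version had more ergonomics and an ESLint rule watching over it.

export class Subscriptions {
    constructor() {
        this._cancels = new Map();
        this._anon = 0;
    }

    add(cancel, name = `anon-${this._anon++}`) {
        // Re-using a name automatically cancels the previous subscription
        if (this._cancels.has(name)) {
            this._cancels.get(name)();
        }

        this._cancels.set(name, cancel);
    }

    cancel(name) {
        const cancel = this._cancels.get(name);

        if (cancel) {
            cancel();

            this._cancels.delete(name);
        }
    }

    // Called from a component's onDestroy to clean up everything at once
    destroy() {
        this._cancels.forEach((cancel) => cancel());
        this._cancels.clear();
    }
}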

Externally-resolvable Promises #

The ideal of a Promise is that you only ever need to resolve or reject it from inside the callback function. That is a great ideal but sometimes you need to be able to resolve or reject a Promise from outside, and for us it was usually when we'd otherwise have to wrap several hundred lines of code inside the Promise callback. We wrote a small wrapper around a Promise that we called deferred.js to handle this. It was maybe a small subversion of the intent behind the Promise API but we found it useful.

import deferred from "shared/deferred.js";

const done = deferred();

// ... some time later...

done.resolve("YAY WE DID IT");
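
The implementation is only a few lines; this is one plausible version of it, exposing resolve/reject on the promise instance itself.

// shared/deferred.js
export default function deferred() {
    let resolve;
    let reject;

    const promise = new Promise((res, rej) => {
        resolve = res;
        reject = rej;
    });

    promise.resolve = resolve;
    promise.reject = reject;

    return promise;
}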

UI Performance #

Performance is always a huge priority for games and while this was true for Crucible we didn't have the infrastructure in place early on to really keep tabs on things. We certainly made a bunch of choices while working on the UI that were general web tech best practices but ended up not necessarily being the right choices for a fast-paced competitive game using Coherent GT. When we finally started getting performance info about the game as a whole and the UI in particular we found some places where we were significantly below where we wanted and expected to be. Ryan McMillan did a massive perf audit of the entire UI and found some really wild results, and then had to come up with some even wilder solutions.

Svelte transitions #

Transitions in svelte are incredibly useful and were used heavily throughout the project. We had made the unfortunate assumption that they would be fast since they're small pieces of JS that dynamically generate a CSS animation, but one of the first performance issues we hit on the UI was something causing full-screen repaints and layouts. It turns out that in the version of WebKit that Coherent GT is based on, injecting a <style> tag into the document forces a full-screen layout and repaint, which is very expensive.

Our solution was to remove the entire CSS generation aspect of the svelte transitions by scoping down the transitions we needed to as few as possible, defining static CSS animations to support them, and configuring the animations via CSS variables and custom transition logic. I did some prototyping to prove that the idea could work and then Ryan took over and implemented a solution that we were able to use for the rest of the project.

@keyframes fade {
    from {
        opacity: var(--opacity);
    }

    to {
        opacity: var(--opacityend);
    }
}

Instead of the built-in transition:fade our custom fade transition would calculate the starting opacity for the element, combine that with the direction/duration/delay of the svelte transition args, and then add those as styles on the element (taking into account any existing inline styles or animations), resulting in a style attribute that looked something like this:

<div style="--opacity: 1; --opacityend: 0; animation: 400ms ease-out fade both;">

This approach let us keep the developer ergonomics of the svelte transition syntax while still being mindful of performance in our custom browser environment.
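
Stripped of the inline-style merging, the custom transition boiled down to something like this sketch; the parameter handling is simplified and the real version covered more cases.

function fade(node, { duration = 400, delay = 0, out = false } = {}) {
    const current = getComputedStyle(node).opacity;

    // Feed the static fade @keyframes via CSS variables instead of
    // generating a new <style> tag per transition
    node.style.setProperty("--opacity", out ? current : "0");
    node.style.setProperty("--opacityend", out ? "0" : current);

    node.style.animation = `${duration}ms ease-out ${delay}ms fade both`;
}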

The * selector #

The * selector caused a massive perf hit to our per-frame times whenever styles were recalculated, which was often. I covered that in more detail in the section on Modular-CSS.

Layers #

Coherent GT, like most web browsers, can promote certain elements to the GPU as a texture. Their documentation calls these textures "layers" and there are some specific rules around how many you can use since they're not free in terms of memory. They're extremely valuable to have whenever an element has a transition or animation that affects its transform or opacity values because they avoid the need for a new layout & paint pass on every single frame.

We weren't being terribly careful about our layer creation originally and after a few months Coherent GT started spitting out warnings that we were creating too many of them. The solution was to have Jessica Chappell go through the entire HUD finding every single rule that created a layer and figuring out how far we could move it up the stack to limit the number. This was a painful manual process that we didn't want to repeat so we worked on some stylelint rules that would warn on layer creation to ensure you had to be very explicit about meaning to create new layers.

During Ryan's perf audit we found that the original de-layering pass had actually moved some of those layers too far upwards and now they were costing us a lot during layout of the page, so from then on out we had to carefully balance the number of layers we used with repaint times. A lot of our concern about layers was lessened once we figured out a more effective approach to tightly scoping style recalculations because it shrank the overall size of the DOM so much and having too many layers actually cost a surprising amount during style recalculation.

<RadialTimer /> #

Another perf pain point was any time we needed to show a radial timer on the screen. These were the standard timer element for Crucible so they could be everywhere. They were used for events, interactions, medkit applications, ability charges recharging, etc. Performance for these was hugely important due to the sheer number we had, so they went through multiple revisions and approaches before we got the cost down to a level we were comfortable with. The first iteration of our <RadialTimer /> component used a dynamically-rendered <svg> element drawn by Svelte where we'd adjust the stroke-dasharray and stroke-dashoffset values from within a JS-driven loop, much like the built-in draw animation that Svelte offers. When this wasn't getting us the performance we wanted Ryan came up with an interesting solution: two <div> elements that would rotate around each other as the timer filled. This improved performance enough to keep for a while, but was doomed to need replacement in the next round of perf audits.

The final version that was settled on used a <canvas> element and some clever requestAnimationFrame usage to draw a path along an arc on the canvas, redrawing only the pixels that needed to change each frame. This finally got the cost of the <RadialTimer /> down to the point where having up to 7 or 8 of them onscreen at a time wasn't totally killing the UI performance.

Rapidly-changing text is 💀 #

Another nasty performance issue we ran into was text that changed quickly; this could be something like the distance numbers on a nameplate, your character's health, or even the ammunition counter that some characters had. We originally built all of these as regular HTML text nodes but as the perf audits kicked off it was noticed that they were causing reflows, repaints, and style recalculations as the text changed. The rate of change combined with the number of elements was seriously driving up our frametimes.

Ryan proposed a clever solution to replace the bare text nodes with a series of sliding textures that could snap to the correct location via a CSS transform to represent rapidly-changing numbers. Nissy built a version of this into a component we called <Number /> and while it made a dent in the frametimes it wasn't enough. Eventually much like with the <RadialTimer /> component we needed to move to a solution powered by <canvas> because the way browsers lay out text was too expensive when it needed to be able to change rapidly during gameplay.

Tightly scoping style recalculations #

Throughout all of these perf audits a huge amount of Ryan's time was spent understanding and isolating why we'd occasionally see large spikes in our style recalculation and page layout times. Eventually with enough data and testing a hypothesis was formulated but it wasn't one that spelled good news for the UI: we had too many elements on the page. Cutting down the number of elements on the page would've been possible but we had already had to rein in our usage of some advanced styling because Coherent GT didn't support it reliably so getting the total element count down would have been tricky.

So Ryan did what Ryan does best: ran some experiments and proposed a radical-sounding idea.

What if we used <iframe> elements to scope parts of the HUD?

I want to say that this was my reaction

But honestly I suspect my reaction was more along these lines

It was a pretty unusual idea, and certainly sounded challenging to make work. The more we talked through the practical implications of it and the potential performance upsides the more Ryan won me over. Eventually I came around and agreed that we should start figuring out a way to make that work that didn't dramatically change the workflow for the team or require really terrible manual labor any time we wanted to isolate a component for performance reasons.

What Ryan came back with was a custom component that could be passed a list of Svelte components and properties and would somewhat-magically wrap them all inside an <iframe> despite them all living in the main page's JS context. This side-stepped the need to have multiple listeners for the data and extra instances of our core functionality so the overall memory impact was pretty small. After a bit of goofing around coming up with a good name we ended up calling the component <PerfJail />, and then Ryan started implementing. When fully deployed the HUD used about 8 instances of <PerfJail /> which added a negligible amount of memory to the UI but for some components could cut style recalculation times down to a tenth of what they had been before. Overall UI frame times improved by over 100% and it also helped to corral the occasional very spiky long frame times we had been seeing here and there.

With a more up-to-date browser platform this would have been a lot simpler, because what we had invented was essentially CSS Containment, but since that wasn't an option a Svelte component that could create an <iframe> and manually render components into it was the next best thing. One of the challenges with the implementation being based on <iframe> tags is that styling from the parent page doesn't cross the barrier into the frame, so Ryan had to build up a somewhat elaborate system to collect all the <link> elements on the page and re-create them inside each <iframe> so that all the appropriate styles existed. There was a rough plan to instead hook into the Modular-CSS dependency graph some day to build up the list of necessary styles at build time, but that never got built out due to time constraints.

Fast-changing CSS variables #

There were a few other surprises discovered before the perf audit that we had to move away from in favor of different approaches. Throughout the UI we had used CSS variables to simplify dynamic styling that could change based on component props, but much like what we found with rapidly-changing text, every time we used that approach we saw huge style recalculation times per frame due to the cascading updates in the CSS engine. Removing the variables in favor of inline style attribute overrides brought those times back into budget. CSS variables were fine for things that wouldn't change often but anything that updated more than once a second wasn't going to fly.

Completed CSS Animations #

Maybe the thing that took me most by surprise during the auditing was the discovery that CSS animations would impact our style recalculation time forever, even after they had completed.

Ryan was able to work up a reasonable solution to this particular issue but not finding it until late in the project made applying the fix much harder. The way the fix worked was that a Svelte action was added to any element that had an animation. It'd listen for the animation to finish on the element and set a [data-animation-removed="true"] flag on it. The other part of the fix was going through every CSS animation in the HUD, copying the end result of the styling that would be applied, and manually duplicating that into a child rule keyed off the data-animation-removed attribute that also blanked out the animation-name value so it was no longer applied to the element.

.foo {
    animation-name: animated;

    color: white;

    &[data-animation-removed="true"] {
        animation-name: none;

        color: red;
    }
}

@keyframes animated {
    to {
        color: red;
    }
}
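
The action half of the fix was small; a sketch of it might look like this, with the attribute name taken from the CSS above.

export function removeCompletedAnimation(node) {
    const done = () => node.setAttribute("data-animation-removed", "true");

    node.addEventListener("animationend", done);

    return {
        destroy() {
            node.removeEventListener("animationend", done);
        },
    };
}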

This was a slow, somewhat laborious process that would've been fine if we'd been doing it all along. Going back and needing to retrofit the entire HUD with it was painful but necessary to help get the UI closer to meeting its budget.

Floater positioning #

The UI layer for Crucible was responsible for drawing almost everything that wasn't 3d in the game, which included anything that floated around on screen. This was used for showing nameplates of friendly and enemy characters, in-game objectives and their distance from you, and every ping that you or a teammate placed in the world. In our original iteration the UI was responsible for sending a message via an endpoint asking to track a specific object or position in the world. From there C++ would then send the UI updates at a rate of one update per frame on the location of the tracked target. This worked fine at first but under the microscope of our perf auditing some issues began to appear.

The sheer number of things & positions in the world that needed tracking began to cause significant hitchy frames due to the garbage collector pressure that all of the large & constantly-updating position endpoint responses created. We never built a system where an endpoint could continually update a specific object, so on every frame we were creating a new response object, and then several seconds later a garbage collector run would spike a UI frame through the roof once all the disposed objects were cleaned up.

The UI took the screenspace data from the endpoint and converted it into a CSS transform that used the translate() function to shift the nameplate to the correct place on screen. This approach for positioning elements on screen cost a fair amount of style recalculation time. There was an attempt to mitigate this by using the <PerfJail /> component to isolate floaters onto their own separate rendering layer and while this helped it wasn't enough.

Ryan's first step in tackling this problem was trying to get the garbage collector under more control. The proposal was that instead of the UI getting screenspace locations for the elements that instead we would always ask for worldspace. Worldspace coordinates for things like static positions on the map never changed, and even for something like a character only changed as fast as the other character was moving.

Compared to the screenspace coordinates we were getting previously, which would change every single time the local character or camera moved, it was a drastic reduction in the amount of data being shipped to the UI every frame. For that information to be usable though we did have to add a new endpoint that would send the player's camera transformation over every frame, then do a bit of math on the JS side to calculate the worldspace-to-screenspace transformation entirely in the UI. Overall this approach helped more than we had expected as it significantly dropped the number of large frametime spikes we were seeing, but it wasn't enough.
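
The math itself is the standard world-to-screen projection; here's a sketch using gl-matrix, assuming the endpoint shipped a combined view-projection matrix for the camera.

import { vec4 } from "gl-matrix";

function worldToScreen(world, viewProjection, width, height) {
    const clip = vec4.transformMat4(
        vec4.create(),
        [world[0], world[1], world[2], 1],
        viewProjection
    );

    // Behind the camera, don't draw
    if (clip[3] <= 0) {
        return false;
    }

    // Perspective divide into normalized device coordinates (-1 to 1)
    const ndcX = clip[0] / clip[3];
    const ndcY = clip[1] / clip[3];

    // NDC to pixels, flipping y since screen space grows downward
    return [
        ((ndcX + 1) / 2) * width,
        ((1 - ndcY) / 2) * height,
    ];
}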

As we were trying to keep our perf work from putting features at risk we were tending to start small and then take bigger and bigger risks as we learned more about where our bottlenecks still were. After moving all the transform logic to JS it was clear that we needed to take a bigger swing at reducing the UI per-frame CPU costs. Ryan and I had discussed in the distant past an approach where the UI was no longer responsible for positioning floating elements. Instead on a separate view from the main UI we'd essentially draw a dynamic spritesheet of floating UI elements. Via an endpoint we could then tell C++ "draw this nameplate attached to this entity, here's the coordinates" and then while the UI stayed responsible for actually drawing the nameplate we could take advantage of C++'s lower latency on drawing to screen and lack of any need to care about style recalculation times.

With Bob Rost's help Ryan was able to put this system together and get it committed and we did see really shockingly large improvements in the overall UI speed. Ryan had to solve some interesting problems around packing the sprites as efficiently as possible into the dedicated sprite view and settled on using shelf-pack which worked excellently. Removing the strain of positioning all those elements every frame from Coherent GT let it (and the UI Engineers) do the things we were good at while delegating the stuff that had to be extremely fast to C++ which is very good at being fast. We still kept all the advantages of using web tech to draw our UI but also were able to use the raw speed of C++ for moving pixels around the screen.

Unfortunately due to Crucible's cancellation we never got to ship that change to customers.

