Top Types for Code Connections

I just released an update for Code Connections, featuring a new mode which I’ve dubbed ‘Top Types’. Top Types mode tries to calculate the most important types in an entire solution, and lays them all out for you in the Code Connections dependency graph.

The idea for Top Types mode nucleated while I was developing v1 and trying to work out the best layout algorithm to use. I wanted to understand the typical averages and ranges for various graph properties for real .NET solutions, so I wrote some rough code to calculate those properties from the in-memory dependency graph and log them in Visual Studio.

One thing that struck me was that, when I extracted those statistics for codebases I was familiar with, the classes that scored highly on certain metrics were classes that were ‘important’ in some way. Two properties in particular were interesting. One was the number of dependencies a given class had: in graph terms, the number of outbound edges in the dependency graph. The other was the number of dependents it had, or the number of inbound edges.

Classes with a lot of dependencies by definition are referencing many other types. I found that the classes that had the most dependencies were the ‘manageresque’ classes that had the broadest set of responsibilities and the biggest share of the business logic.

Meanwhile, the classes with the most dependents seemed to be the key primitives, the most widely-used building blocks in the codebase.

What if this kind of analysis could be used, I wondered, to pick out the most important classes in any codebase, in an unguided way? And with that could you build a kind of skeletal structure that would help to understand the code as a whole, to see at a glance what an application or library is doing, and what’s doing the doing?

This idea sat at the back of my mind, but it returned to the forefront when I was trying to quickly come to grips with an unfamiliar codebase, and trying to understand what was important and how different classes related. Code Connections was already useful, but I found myself wishing I had that ‘important classes’ feature. That’s a pretty good sign that something’s worth doing.

Building the thing

How does an algorithm identify the key classes in a codebase? I have a lot of ideas on this topic which, I freely admit, I haven’t had the time to implement. For Top Types v1, I’ve focused on building out the foundations of the feature based on the easiest metrics to calculate.

This includes the number of dependents and number of dependencies I’ve already mentioned, since these quantities basically come for free with the dependency graph. Lines of code (LOC) was also low-hanging fruit. You can show top-ranked types by any of these metrics in isolation, and there’s also a combined option that tries to be smart about using them to guess at the overall most important types in the graph.
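Just to give a flavour of what a ‘combined score’ could look like, here’s a rough sketch – the weights and log scaling below are purely illustrative, not the formula Code Connections actually uses:

using System;

// Illustrative only - the weights and the log scaling are assumptions,
// not the scoring that Code Connections actually performs.
public static double CombinedImportanceScore(int dependents, int dependencies, int linesOfCode)
{
    // Log scaling stops one enormous class from drowning out everything else.
    var dependentsScore = Math.Log(1 + dependents);
    var dependenciesScore = Math.Log(1 + dependencies);
    var locScore = Math.Log(1 + linesOfCode);

    // Weight dependents most heavily: widely-used building blocks tend to matter most.
    return 2.0 * dependentsScore + 1.5 * dependenciesScore + locScore;
}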

How does it look? Here’s top types from the Entity Framework Core codebase, using the combined score mode:

You can try out Top Types mode in Code Connections with your own code right now. I’d love to hear your feedback!

Making Code Connections

Code Connections started out, as these things often do, as an attempt to address a specific, personal need.

I was working on a fun side-project, an early prototype of a puzzle game. I had been happily writing code, and had a whole pile of new or modified files. I was at the point where the logic was getting a bit fiddly, and I wanted to commit my work to source control before going further so that I had a restore point.

Now, I’m personally on the perfectionist end of the spectrum when it comes to Git. Partly by aesthetic preference, partly because having a well-structured source history to refer to really has saved me a ton of time, on more than one occasion. So I was looking at 100+ added and modified files, and wondering how I was going to wrestle them into a logical sequence of commits.

That’s when I had an idea. If I could just see the formal dependency relationship between all my classes, it’d be much easier to work out which changes logically depended on others. What if I made a Visual Studio extension that used Roslyn to show me a dependency graph of all my changes?

At the very least, it sounded like a fun project, so I decided to take a shot at it.

Populating a dependency graph in Code Connections.

VSIX 101

One thing I’d learnt from previous brief forays into authoring Visual Studio extensions, or VSIX packages, is that it’s surprisingly easy, at least at the beginning.

Tick the box to add the right workload, create a new project, and you’re away. Want to add a new window? There’s a helpful tutorial, there’s a wizard that adds the various bits for you, it all just works.

Oh, you wanted your VSIX to actually do something useful? That part is harder. Outside the brightly-lit basics covered in the getting-started tutorials, Visual Studio’s extension API is intimidating: vast, tersely documented, riddled with COM-isms, and layered with decades’ worth of redundant interfaces, making it hard to tell at times if any given type is obsolete or still current.

Fortunately, others have trodden the path. Most of the time there’s a StackOverflow answer, or a blog post. And there are plenty of published extensions up on GitHub to peruse. (In fact the GitHub VS extension itself happens to be a particularly rich seam.)

Building the graph

Before we can visualize anything, we need something to visualize. The first step, then, is to build a model of the dependency relationships we’re interested in. This model takes the form of a graph, with each type as a vertex, and a dependency of TypeA on TypeB as an edge from the TypeA vertex to the TypeB vertex.
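In miniature, that model might look something like this – a toy sketch rather than Code Connections’ actual types:

using System.Collections.Generic;

// Toy sketch of the graph model: one vertex per type, with a directed edge
// from TypeA to TypeB whenever TypeA depends on TypeB.
public class TypeVertex
{
    public string TypeName { get; }

    // Out-edges: the types this type depends on.
    public HashSet<TypeVertex> Dependencies { get; } = new HashSet<TypeVertex>();

    // In-edges: the types that depend on this type.
    public HashSet<TypeVertex> Dependents { get; } = new HashSet<TypeVertex>();

    public TypeVertex(string typeName) => TypeName = typeName;

    public static void AddDependency(TypeVertex from, TypeVertex to)
    {
        from.Dependencies.Add(to);
        to.Dependents.Add(from);
    }
}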

The graph-building phase evolved considerably from my initial prototype through to its current form.

To extract any information, we need a Roslyn workspace for the solution that’s currently open. This is very easy to get from Visual Studio:

// SComponentModel and IComponentModel come from Microsoft.VisualStudio.ComponentModelHost;
// VisualStudioWorkspace is Roslyn's workspace implementation for Visual Studio.
var componentModel = GetService(typeof(SComponentModel)) as IComponentModel;
var workspace = componentModel.GetService<VisualStudioWorkspace>();

My first ‘simplest thing that worked’ approach was the following algorithm:

  • start at a type (TypeA)
  • do a depth-first or breadth-first search to add dependencies (eg, types referenced by TypeA, types referenced by those types, etc)
  • repeat for other types of interest

The second step there, finding the types referenced by TypeA, is simply a matter of traversing the syntax tree (or trees) corresponding to TypeA provided by Roslyn and checking what type symbols are referenced in it.
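In simplified form, the Roslyn side of that looks something like the sketch below – it ignores partial classes, generics, and the other wrinkles the real code has to handle:

using System.Collections.Generic;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using System.Linq;

// Simplified sketch: walk a syntax tree, resolve each identifier via the
// semantic model, and collect any symbols that turn out to be named types.
public static IReadOnlyCollection<INamedTypeSymbol> GetReferencedTypes(SemanticModel semanticModel)
{
    var referenced = new HashSet<INamedTypeSymbol>(SymbolEqualityComparer.Default);
    var root = semanticModel.SyntaxTree.GetRoot();

    foreach (var identifier in root.DescendantNodes().OfType<IdentifierNameSyntax>())
    {
        if (semanticModel.GetSymbolInfo(identifier).Symbol is INamedTypeSymbol namedType)
        {
            referenced.Add(namedType);
        }
    }

    return referenced;
}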

This approach worked fine for my initial, narrow vision, which only included a fixed set of types in the graph (namely, those that had been locally modified in source control). If TypeA and TypeB were both modified, and TypeA depended directly or indirectly on TypeB, the algorithm above would pick up that relationship.

But I quickly realised that the tool could be more broadly useful if I could say things like, ‘please visualize the connections of TypeA in both directions’, ie types referenced by TypeA and types that themselves reference TypeA. The only way to get that kind of information is to check every type in the solution. So that’s the approach I opted for. (The worst-case performance is the same as for the simple algorithm anyway, since it’s possible that one of your ‘root’ types depends, directly or indirectly, on every other type in the solution.)

The current version of Code Connections in fact constructs two graphs. First, it builds a ‘model graph’ that contains every type in the solution (except some that may be filtered out, eg generated types, or types within manually-excluded projects) with their dependencies. Second, it builds a (typically much smaller) ‘display graph’ containing only the types that will actually be visualised according to current settings.

Constructing the ‘model graph’ was fairly quick for my game prototype’s young codebase, but more time-consuming for large codebases. Initially, if the code changed anywhere, we would simply throw away that work and rebuild the whole graph from scratch.

Much better is to incrementally update the graph. If the code in a file is edited, then to a good approximation we can say that only the dependencies of the type or types defined in the file will change. So we only need to update the out-edges of the vertices for those types. I eventually succumbed to temptation and ended up implementing incremental graph updates, and it turned out really well.

(Why “to a good approximation”? There are a few cases where this isn’t true: the most significant I’m aware of being the case of code that’s initially in error. That is, say I’m editing AlreadyExists.cs and I decide I’ll need a new class, DoesntExistYet. In my AlreadyExists code, I add a call to DoesntExistYet.BrandNewMethod(). Now, subsequently, I actually create the DoesntExistYet class, and give it the BrandNewMethod() method. By so doing, I’ve now created a valid dependency of AlreadyExists on DoesntExistYet, without actually editing AlreadyExists.cs.)
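The core of the incremental update is simple. In terms of the toy TypeVertex sketch from earlier – and with hypothetical GetTypesDefinedIn/GetTypesReferencedBy helpers standing in for the Roslyn work – it amounts to something like this:

// Rough sketch only: when a file changes, rebuild the out-edges of the types
// defined in that file, and leave the rest of the graph untouched.
// GetTypesDefinedIn and GetTypesReferencedBy are hypothetical helpers here.
public void OnFileChanged(string filePath, Dictionary<string, TypeVertex> modelGraph)
{
    foreach (var typeName in GetTypesDefinedIn(filePath))
    {
        var vertex = modelGraph[typeName];

        // Drop this vertex's stale out-edges (and the matching in-edges)...
        foreach (var oldDependency in vertex.Dependencies)
        {
            oldDependency.Dependents.Remove(vertex);
        }
        vertex.Dependencies.Clear();

        // ...then rebuild them from the freshly-parsed file.
        foreach (var referencedName in GetTypesReferencedBy(typeName, filePath))
        {
            TypeVertex.AddDependency(vertex, modelGraph[referencedName]);
        }
    }
}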

Working with Git

Visual Studio has solid Git integration out of the box, and I thought it might expose its Git functionality through an API, but as far as I can tell it doesn’t. After looking at other Git-related VSIX packages (particularly GitDiffMargin), I opted to use LibGit2Sharp, a .NET wrapper for the libgit2 library.

It took some reorienting from my user-level mental model of Git towards libgit2’s lower-level API, but in the end it was pretty easy to do what I wanted, which for v1 was just to get a list of modified and added files.
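For illustration, getting that list with LibGit2Sharp boils down to something like the following – a simplified sketch rather than the exact code in Code Connections:

using System.Collections.Generic;
using System.Linq;
using LibGit2Sharp;

// Return the paths of files that are new or modified, whether staged or not.
public static IList<string> GetAddedOrModifiedFiles(string repositoryPath)
{
    using (var repo = new Repository(repositoryPath))
    {
        var interesting = FileStatus.NewInIndex | FileStatus.ModifiedInIndex |
            FileStatus.NewInWorkdir | FileStatus.ModifiedInWorkdir;

        return repo.RetrieveStatus(new StatusOptions())
            .Where(entry => (entry.State & interesting) != 0)
            .Select(entry => entry.FilePath)
            .ToList();
    }
}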

Visualizing the graph

For visualizing the graph, I didn’t know much to begin with, other than that it was possible. I had no idea if it would be easy or hard.

Automatically generating a visually appealing 2-dimensional mapping of a set of vertices and edges is a long-standing topic of interest for computer scientists (not least in the context of producing nice figures for academic articles).

One venerable stalwart here is GraphViz, a widely-used software package which provides both a number of routines for creating mappings in various styles (hierarchical, circular, all blobbed together, etc), and a standardized text format for defining graphs (both mapped and unmapped).

I was looking for a WPF library that I could easily incorporate into the Visual Studio extension. The first thing I found in my exploratory phase was a CodeProject project, Dot2WPF, which visualises GraphViz-formatted graphs within a WPF control. It supported mouse interaction with the elements, which was one thing I was looking for. I ran the sample, and indeed it seemed to do what I needed. I thought I was set.

When I got around to actually having output I wanted to graph, however, I found that Dot2WPF is somewhat limited. The problem was that the GraphViz text output format it supports is assumed to specify the Cartesian coordinates of the vertices. In other words, Dot2WPF assumes that the layouting problem has already been solved.

One option would be to use GraphViz itself for the layouting part, but the more I looked at that option the less I liked it. GraphViz didn’t seem to be available as a library, even a native library. I found one or two .NET wrappers, but they operated on the assumption that GraphViz was already installed by the user on their system. Installing GraphViz as a separate manual step might be acceptable for my own use, but it’d certainly limit adoption if I ever ended up with something I wanted to publicly release. I didn’t like the idea.

What then? GraphViz is open source; perhaps I could port one of its layouting routines from C? I didn’t relish the idea. My enthusiasm for the whole adventure was faltering.

I was too focused on one potential solution; it was time to pull back. I read the description of ‘NEATO’, one of the more useful GraphViz routines, which notes that it’s based on a 1989 paper by Kamada and Kawai. I searched for ‘Kamada and Kawai’; the sixth Google hit was a StackOverflow question, and the fourth answer was 1-line paydirt:

There is also http://graphsharp.codeplex.com which provides a number of layout algorithms for C#.

https://stackoverflow.com/a/22942336/1902058

GraphSharp turned out to be everything I wanted and more. Not only did it implement Kamada & Kawai’s algorithm and a number of other graph layouting strategies, but beyond that it also provided a WPF control for visualizing the results. When I tried out GraphSharp’s UI tooling, it was notably more feature-rich and polished than Dot2WPF. GraphSharp takes a slightly different approach to creating individual graph elements: where Dot2WPF used lightweight Visuals for better performance, GraphSharp used full-fat WPF controls; but the performance didn’t seem noticeably worse for large numbers of elements, and having real controls would anyway be advantageous if I were to want to customize the appearance or interactivity of the graph elements.

For defining input graphs, GraphSharp depended on QuickGraph, a library I’d already come across as a widely-used standard for general-purpose graph and network algorithms in .NET.
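To give a flavour, defining an input graph with QuickGraph is about this simple (Code Connections’ real vertex type carries far more information than a string, of course):

using QuickGraph;

// Minimal example: a directed graph with string vertices.
var displayGraph = new BidirectionalGraph<string, Edge<string>>();

displayGraph.AddVertex("TypeA");
displayGraph.AddVertex("TypeB");

// TypeA depends on TypeB: a directed edge from the TypeA vertex to the TypeB vertex.
displayGraph.AddEdge(new Edge<string>("TypeA", "TypeB"));

GraphSharp’s layout control then takes a graph in this shape and works out where each vertex and edge should be drawn.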

With GraphSharp for visualization, along with Roslyn and LibGit2Sharp, my proof-of-concept fell into place, and it wasn’t long before I could finally see all those 100+ changes in Git arranged visually by dependency relationship. It was a glorious mess, but the visual graph really did help me make sense of it. With the help of proto-Code-Connections, I organised my sprawling pile of changes into a useful commit sequence.

Connecting it all together

Finally I had a tool that scratched my immediate itch. Did I have something more than that on my hands?

I found myself working on something else and wishing I had Code Connections to help me, which was a good sign. I added a simple button that would add the current open document to the graph, so I wasn’t just restricted to looking at Git-modified files.

What would a more general-purpose tool look like?

Visual Studio already has a feature to map dependencies; I’d tried it a long time before, at one point when I had access to the Enterprise version. I loved the idea, but my recollection was that it took a long time to generate the map, the map was enormous, and by the time I had it I wasn’t quite sure what I wanted to ask it. I tried it once and then forgot about it. If I had to guess what it was for, it seemed like it was for printing out and pinning up next to the whiteboard while you had a long architectural argument about inheritance hierarchies.

So if I created a new tool it should be completely different in spirit, and I wanted something completely different anyway. I wanted something that would help me, selfishly, make sense of my code in my day to day. I spend a significant fraction of my professional life just trying to understand how everything fits together, how one class or method relates to all the others. If it could help me, maybe it’d help other people as well.

Some requirements: the tool should be fast. If I pose a question to it, I want an answer in seconds; otherwise I’ll get tired of waiting.

The information it gave you had to be tractable. In terms of building the graph, that meant pulling in more information as you needed it – show me this type, now add its connections, now show me that type – rather than dropping a ton of information on you and making you drill down to what you were interested in. This obviously would rely upon it being fast.

Those top-level priorities I had formulated even before I had a proof-of-concept. But once I had a POC, I was able to get a better feel for what they meant, and also to get a sense of what the performance characteristics were, at least on an order-of-magnitude level, and what might be doable.

The UX of the POC was quite different from the way that the released version of the tool works. In the POC, some of the graph vertices were ‘roots’, and there was an adjustable ‘depth’ value to set how much graph to show beyond the roots: depth 1 would show the first-nearest neighbours of the roots, depth 2 would show first and second-nearest neighbours, and so on. It was an artifact of my initial forays into building the data model, as much as anything.

I scrapped it in favour of the principle of simply giving the user various options for adding and removing types from the graph. Once I had implemented incremental updates and caching, making small changes to the displayed elements was very quick.

It was so quick, in fact, that I decided to make the leap and default to including the current open document, and its connections, in the graph at all times. It turned out to work well. One nice thing about it is that the first ever time you open Code Connections, you generally see some content in the graph straight away, rather than getting a blank window. First impressions matter!

With the active document locked to the graph, the option to ‘pin’ additional types to the graph, and the Git mode, I felt like I had enough for a public release. And the rest is history, if polish and bugfixes count as history.

The unfinished product

That is how I got to Code Connections v1. I have all sorts of ideas for features I’d like the tool to have, from the fairly straightforward to the entirely not-straightforward. The guiding vision is to make it easier to make sense of your code while you’re writing the code.

Thanks for reading about the making of Code Connections! It’s free and open source – check out the code if you’re curious, and if it sounds useful, you can install it in Visual Studio and try it out right now.

Code Connections for Visual Studio

Code Connections started out as a quick tool when I was frustrated trying to understand all the changes I was making as part of a side project. As happens sometimes, it quickly grabbed my full attention, rudely shouldered the first side project out of the way, and finally has reached the point that I’m comfortable making it publicly available.

You can get it here.

It’s an extension for Visual Studio for Windows, for C# developers. The basic idea is that if you have this:

Code Connections gives you this:

Continue reading “Code Connections for Visual Studio”

Pros and cons of the code-markup hybrid approach

Previously I looked at the respective merits and shortcomings of dedicated markup and general-purpose code for authoring an application UI. In that post I alluded to some real-world approaches that didn’t fit within the definitions I adopted.

The most important style of approach that I omitted is the ‘code+markup hybrid’ approach – one where code and markup can be freely mixed in the same file/parse unit. Naturally enough this omission was noted in responses to the article:

Continue reading “Pros and cons of the code-markup hybrid approach”

Angled brackets, yay or nay? Markup vs code for UI

There’s a good-spirited but earnest debate currently about the ‘right way’ to write a UI-driven application. In many domains where specialized markup languages for authoring visual layouts have long been the dominant paradigm, newer frameworks are appearing which eschew markup completely, opting to declare UI purely in code.

Both the ‘markup’ approach and the ‘code’ approach have vocal proponents. Personally I don’t have a horse in the race – yet – but I do have thoughts on the benefits and limitations of each approach, and I wanted to get them down, mainly for my own benefit.

Let’s give some concrete examples first, in case it’s not clear what I’m talking about.

Examples of markup-based UI: HTML/CSS, Xaml (UWP+Uno/WPF/Xamarin.Forms), Android’s xml layout format

Examples of code-based UI: Flutter, React, SwiftUI, Comet (.NET)

Now we can talk about the relative merits of each approach.

Benefits of markup

Markup is structured

Retained-mode GUIs are typically implemented as a recursively-defined tree of objects, and UI markup formats are well-suited to defining a hierarchy of objects.

Consider the following UI layout, first in Xaml and then in C#:

Xaml:

<StackPanel>
	<TextBlock Text="Enter search string"/>
	<TextBox/>
	<Button Content="Search..."/>
</StackPanel>

C#:

new StackPanel()
{
	Children =
	{
		new TextBlock { Text = "Enter search string" },
		new TextBox(),
		new Button { Content = "Search..." }
	}
}

With such a simple layout, they’re not so different. Nonetheless I believe if I were to see them cold, in the middle of a file, it’d be almost immediately obvious to me what the first is describing, whereas it’d take me a few seconds to pin down the meaning of the second. The first has to be a hierarchical object declaration. The second could be anything – that’s the beauty of code! – but by that token it takes a moment longer to narrow down the vast possibility space of what it could be saying to what it is saying.

Markup is a domain-specific language

This point is really a generalization of the first: markup is a domain-specific language for UI declaration, with all the advantages (and drawbacks) that entails. The parser can do a lot of clever stuff that might not be appropriate in a general-purpose language, like context-sensitive implicit conversions, specialized syntax, implicitly understanding nested content, etc.

Inherent separation of concerns

Markup pushes you to keep your UI separate from other layers of your app because it can only do UI. I don’t find this point particularly compelling: we’re reliant on discipline and good dev culture to maintain separation of concerns in all the other layers of the app, so I don’t think artificially forcing that separation solely for UI makes much difference. But I guess it’s a minor point in markup’s favour.

Markup is tooling-friendly?

I include this one because it seems to’ve been an important argument historically for why markup is superior. The argument is that markup, with its inherent structure and reduced expressivity, would be more amenable to tooling support where you drag and drop UI elements into a WYSIWYG editor. The dream seems to’ve been that designers would author the bulk of an app in such a tool, a developer would come along and tweak the resulting markup output, and app development times would be slashed.

I don’t think that’s necessarily technically inaccurate – such tools do exist, like Blend and Xaml Designer. It just doesn’t seem to be particularly relevant. The dream never really panned out. WYSIWYG tools are great for static documents, but in an interactive application there are key aspects that in practice must be manually handled by an experienced front-end dev, like responsive layouts and virtualized lists to name a couple off the top of my head. And a tool and a human ‘collaborating’ on the same raw markup is not a pleasant experience for either, making it difficult to do further tool-aided edits after the markup has been hand-tweaked.

The trend in practice seems to be for designers to use designer-focused tools like Figma and Zeplin, and then to focus on improving the capabilities of such tools to export feedstock for front-end devs, like first-draft markup layouts, colour and text style resources, etc.

Meanwhile on the dev side, the increasing power of ‘hot reload’ capabilities in most modern UI frameworks is making build-time design tools increasingly redundant.

Performance optimizations

Just as the constrained structure of markup is beneficial for editing tools, it also potentially lends itself to pre-parsed intermediate formats which may offer particular performance benefits. UWP’s Xaml, for instance, supports the Xaml Binary Format (.xbf) which loads faster at runtime.

It’s probably more than that: I suspect the curtailed expressiveness of markup helps to steer UI authors away from performance-killing anti-patterns. It’s easy to shoot yourself in the foot when it comes to UI and performance, and markup by no means makes that impossible; but the exposed API surface tries to guide authors towards the happy path. Virtualized ListViews in Xaml languages are a good example.

Benefits of code

Let’s turn to the advantages of declaring UI in code.

Code is Turing-complete

I don’t mean, like, you literally couldn’t implement a Turing machine in markup somehow. (Maybe you could – feel free to tell me how.) What I mean is that code is capable of expressing arbitrary logical constructs – that’s basically code’s whole job – whereas markup struggles to do so.

Proponents of code show examples of UI snippets that are neatly expressed as code, but verbose and unwieldy in markup, often involving conditionally setting a property, or transforming a value. Proponents of markup usually concede that there are some things markup just can’t (or shouldn’t) do; no one I’ve seen is really maintaining the position that a rich interactive application can be built only in markup. The pro-markup position is that the mechanisms for calling into code from markup, or vice versa, are adequate. The anti-markup position is that they aren’t worth the bother.

I would note that the interconnectivity between code and markup varies widely from one markup language to another. Xaml leans heavily into said interconnectivity, with the whole notion of ‘code-behind’ as well as mechanisms like value converters, template selectors, behaviours, etc.

I’m not sure where I stand on this one. Some of the ‘verbosity’ of markup seems more apparent than actual, but whenever I use, say, a value converter, it does feel like a lot of boilerplate. (You know that ‘boilerplate feel’… ugh.) There are innovations that try to address this, like UWP Xaml’s function binding feature, but they don’t yet go far enough.
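To make that ‘boilerplate feel’ concrete, here’s a bog-standard WPF-style value converter (UWP’s interface differs only slightly) – a whole class, which then still has to be declared as a resource and wired into a binding in the markup, all to say ‘show this element when the value is true’:

using System;
using System.Globalization;
using System.Windows;
using System.Windows.Data;

// Converts a bool to a Visibility so a bool property can drive whether an element is shown.
public class BoolToVisibilityConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return value is bool isVisible && isVisible ? Visibility.Visible : Visibility.Collapsed;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}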

Code is reusable

Code reuse is one of the major themes in the development of modern high-level programming languages, and one of the obsessions of the craft of software development. Code is reusable at the level of a one-line method, a million-line assembly, or absolutely anywhere in between.

Reuse is a problem for markup. Some languages, like HTML, have practically no ‘reuse story’ for functionality. I think this is why code-only UI frameworks gained ground earlier in web development than in other settings.

Xaml has a much better reuse story, with affordances like UserControls and control templates. But the boundaries of reuse are relatively fixed and inflexible; and passing information into or out of a ‘block’ of reuse can be tedious, at times arcane. It’s a painful choice at times whether to refactor a Xaml app for greater reuse, or accept the markup duplication in exchange for a more sane architecture. Code wins this round.

Better IDE support

IDE features, whether the IDE is open-source or closed-source, are driven by customer demand, and customer demand is proportional to the number of customers.

As we noted, you can have code with no markup, but you can’t have markup with no code. It follows, then, that code will always have better IDE support than markup, because the set of all users of a given markup language will always be a subset of the users of the associated coding language.

The IDE support for markup is not necessarily bad – Visual Studio for Windows actually has pretty nice support for Xaml. But Visual Studio’s C# support is amazing. And if we look at other popular IDEs, Visual Studio Code to take one example has good C# support but minimal understanding of Xaml. This will hopefully improve in the future, but it seems likely that IDE support for specialized markup is always going to lag behind support for the associated coding language.

You don’t have to learn a new language

Given a programming task, many developers, not unreasonably, prefer to complete it using a language they already know, rather than one they’ve never touched before. Many, moreover, are not full-time front-end developers, but still want to be able to throw together a GUI-driven application when the need arises.

Some developers judge specialized markup languages to be an unnecessary cognitive burden, and would rather write UI in the general-purpose coding languages they already know.

I am well-steeped in Xaml after years working on the Uno Platform, but I know how this feels. Specialized syntaxes serve to demarcate and perimeterize areas of expertise. I know when I see a YAML file, it’s as if I’m seeing a battered wooden board with a skull-and-crossbones and “THIS IS DEVOPS TERRITORY” scrawled on it.

This is a valid shortcoming, then. The negative of being an ‘extra’ syntax to learn is something that markup has to outweigh with other positives, if it’s to be worthwhile.

Good tooling can go some way to alleviate the cognitive burden of a new language, by catching errors and guiding you toward the happy path. This couples into the area of IDE support already mentioned. The issue specifically of error-checking is trickier for Xaml than for a static-typed language like C#, since on Windows the bulk of the Xaml parsing takes place at runtime. It’s further compounded in UWP and WinUI by the .NET/WinRT boundary, which can lead to frustratingly opaque errors.

Ok then!

I wanted to point out a couple of other UI approaches that don’t match the ‘markup’ definition here but aren’t code either, but perhaps another time.

Before I started writing this post, I didn’t have a strong opinion on whether markup or code was the ‘right answer.’ Having written it, I’m more convinced that there is no right answer. Each approach has inherent strengths and weaknesses. A specialized UI markup syntax is essentially a domain-specific language, which can be a powerful tool, but it brings an additional knowledge burden and runs the risk of being a second-class citizen in the tooling ecosystem.

I wrote this post mainly with an ‘app developer hat’ on. To wear a ‘framework designer hat’ for a second: it’s obvious that many individual developers have a strong ‘gut level’ preference for one model or the other. Is it possible for a single framework to please them both? But that’s perhaps a fitting subject for a separate post.

Lifetime management in Unity with UniRx and IDisposables

Previously we discussed the Model-View-Controller pattern as it applies to games, and how C#’s events allow you to pass information between layers without creating dependencies.

In this post I’m going to expand on the pain point I touched on at the end of the previous post – managing lifetimes for event subscriptions so that you don’t unwittingly create memory leaks.

Enter UniRx, a Unity implementation of the Reactive Extensions library.

Continue reading “Lifetime management in Unity with UniRx and IDisposables”

View/model separation in Unity using events

In the last post I discussed the Model-View-Controller pattern and sketched a Unity-specific implementation for it. In this post I want to dive into the details. In particular I want to talk about the ‘observer’ pattern and the subtleties of using C# events. In a subsequent post I’ll cover how reactive extensions via the UniRx library can make your life easier.

Did you change yet? Did you change yet? Did you change yet?

One of the conceptual challenges of Model-View-Controller separation or any layer-based architecture, particularly for newcomers to programming, is how to propagate information from layer A to layer B without A’s code referring to B’s code.

It’s tempting, with Unity’s model of explicit update loops, to simply check on every ‘tick’, in an Update() method, whether some property in the game model has changed. But we want to avoid this – it’s performance unfriendly when you have large numbers of objects, and moreover it’s just ugly.

One answer is the Observer pattern. In essence, a subject type provides a contract allowing any interested observers to say ‘notify me whenever such-and-such happens.’ In our case, the game model is the subject and the controller layer is the observer.

C# implements the Observer pattern as a first-class language feature via ‘event’ declarations. For example:

using System;
using UnityEngine;

public class Terrain
{
    private int _terrainElevation;

    public int TerrainElevation
    {
        get { return _terrainElevation; }
        set
        {
            var elevationHasChanged = value != _terrainElevation;
            _terrainElevation = value;
            if (elevationHasChanged && ElevationChanged != null)
            {
                // Notify any subscribers of the new elevation.
                ElevationChanged(value);
            }
        }
    }

    public event Action<int> ElevationChanged;
}

public class TerrainController : MonoBehaviour
{
    private Terrain _terrain;

    // MonoBehaviours are instantiated by Unity rather than with 'new', so the
    // subscription is wired up in an initialization method rather than a constructor.
    public void Initialize(Terrain terrain)
    {
        _terrain = terrain;
        _terrain.ElevationChanged += OnElevationChanged;
    }

    private void OnElevationChanged(int newElevation)
    {
        UpdateGameObjectPositionForNewElevation(newElevation);
    }
}

Now TerrainController will be notified whenever the TerrainElevation property changes, and it can adjust its view accordingly.

The beauty is that anybody can subscribe to the ‘ElevationChanged’ event as long as they have a Terrain object. This achieves the layer separation that we talked about: the Terrain object in the game model layer doesn’t ‘know about’ the TerrainController in the controller layer. This makes development much easier: when you change your game logic, you just change the code pertaining to game logic – you don’t have to make a bunch of changes to your display code.

Once in a lifetime

So we’re golden, right?

Not quite; there’s one essential consideration we’re neglecting, which is lifetime management.

When you subscribe to the event on Terrain, you effectively pass a reference to TerrainController, so that its callback can be called. Now, remembering that TerrainController is a Component attached to a Unity GameObject, what happens if you unload the scene it’s in – perhaps because you’re navigating back to the menu screen? You expect TerrainController to be destroyed, but Terrain (which is sitting in your model layer, not going anywhere) still holds a reference to it. We’ve created a memory leak.

The good news: there’s an easy fix. We simply unsubscribe from the event:

public class TerrainController : MonoBehaviour
{
    …

    private void OnDestroy()
    {
        // Unsubscribing removes Terrain's reference to this controller, so the destroyed controller can be collected.
        _terrain.ElevationChanged -= OnElevationChanged;
    }
}

Voila, leak patched, crisis averted.

The bad news? Well, if you have more such event subscriptions, or if deciding when they should be unsubscribed is a bit more complicated, it rapidly becomes easy to accidentally introduce bugs. Even writing the example just now, I almost forgot to change the += to a -= when I copy-pasted.

In the next post, I’ll discuss a nifty toolset for making lifetime management less error-prone.

Model-view-controller architecture in Unity

This is going to be a more technical series of posts. I’m going to look at what’s going on under the hood in Delugional, and some of the approaches I took to keep the code manageable.

Outside the world of games, the longstanding best practice for building a user interface is the ‘Model View Controller’ or ‘MVC’ pattern. This pattern has spawned many variants, but the basic principle remains the same. You have a ‘model’ layer, the raw information that your application is concerned with. (Eg, this puzzle has water here and some huts here, and this many hexes, and so on.) You have a ‘view’ layer, which is responsible for showing stuff to the user, and accepting input. And finally you have a ‘glue’ layer which is responsible for passing information between the model and the view. The golden rule is that the model knows nothing about the view layer, and the view layer knows nothing about the model. The ‘glue’, implemented in various ways and known as a ‘Controller’ or a ‘Presenter’ or a ‘View Model’, exists to allow and enforce this separation.

The advantage of this approach is simple and profound: you can make changes to your model without changing your view layer, and vice versa. This isn’t such a big deal in the early stages of development, but it saves you a huge amount of pain on a large project.

Does this pattern transfer to game development – or, specifically, to Unity3D development? I admit it must be tricky to apply to, say, a 3D shooter, where the ‘model’ (eg, the game physics) and the ‘view’ (eg, Unity’s rendering) are closely intertwined at an engine level. But for something like Delugional – a puzzle game with a turn-based mechanic – it’s eminently achievable. I’ll explain how I did it.

Continue reading “Model-view-controller architecture in Unity”