Project Highrise – May 2015 Architect’s Notes

by matt v, May 19th, 2015

A world which sees art and engineering as divided is not seeing the world as a whole. – Sir Edmund Happold

Let’s continue to meet the people that will work in your Project Highrise skyscraper.

Meet the crew.

Last month, we introduced the office workers of Project Highrise. Now let’s meet the construction workers:

[image: construction-female, construction-male]

One of your main jobs in Project Highrise will be as general contractor of your growing tower. These hardworking men and women are at your beck and call every day. It’s their job to build everything in your new tower – from floors to elevators to offices to pipes. It will be up to you to manage this crew and determine the what, where, when and how of the tower’s construction.

Starting out from scratch on a brand new skyscraper, you’ll be presented with empty ground. And a construction trailer. This is where your construction team will be based as your skyscraper rises. If you have nothing for your team to do or your cash flow is a little light that day, they’ll be found hanging out here, going over blueprints and examining coffee cups.

[image: blueprint-may19]

So, exactly how do you build a skyscraper?

Floor by floor.

Before you can ink that first deal for an office rental or welcome your first resident, you’ll need a strong steel floor to hold it all up. Individual floor tiles are the foundation of your tower in Project Highrise. Once you’ve marked out your first stretch of floor, your crew will start building it piece by piece.

You’ve got floor.

Now that your crew has completed construction of a section of floor, you’ve got to decide what to do with it. Is it going to be office space? Part of your food court? A hotel room? A penthouse apartment?

You divide up space on your newly-built floor and the “Coming Soon” signs spring up.  Your team of construction tradespeople will arrive to start turning raw concrete, exposed conduit, and bare iron beams into places for people to live and work.

With the build-out of the space completed, it will be ready to start generating income for your tower. You’ll depend on the rent from offices and apartments and the income generated by restaurants, services and hotels to finance your growing tower.


Happy Birthday, 1849!

by matt v, May 8th, 2015

We released 1849 on May 8, 2014. It’s been a great year and we’ve been humbled by the response to our first simulation game. Thank you, everyone.

In my family – and I suspect many others – whenever there is a celebration of a significant birthday or graduation or wedding, we like to break out embarrassing photos of the person we’re celebrating. It’s fun to look back and see your cousin covered with icing or wearing silly clothes.

With today being 1849’s first birthday, I think it’s only right to continue that tradition and share some of 1849’s baby photos.

So here is the game as it looked on May 3, 2013, about a year before release. We had some rad programmer art, right?

[image: 2013-05-03]

But they grow up so fast. By July of 2013, with Eddie newly on the team, we had our first real Gold Rush buildings in the game.

[image: screenie]

And we were hard at work creating the 49ers to live in them.

[image: npc-outfit-samples]

And by May 8, 2014, the Gold Rush had arrived.

[image: capitol]

To celebrate 1849’s 365th day of existence, the game is 50% off on PC, iOS and Android until Sunday, May 10.

 


C# memory and performance tips for Unity

by robert, April 30th, 2015

There’s a lot of useful information out there about memory and performance optimizations in Unity. I have myself relied heavily on Wendelin Reich’s posts and Andrew Fray’s list when getting started – they are excellent resources worth studying.

I’m hoping this post will add a few more interesting details, collected from various sources as well as from my own optimization adventures, about ways to improve performance using this engine.

The following specifically concentrates on perf improvements on the coding side, such as looking at different code constructs and seeing how they perform in both speed and memory usage. (There is another set of perf improvements that are also useful, such as optimizing your assets, compressing textures, or sharing materials, but I won’t touch those here. Good idea for another post, though!)

First, let’s start with a quick recap about memory allocation and garbage collection.

 

Always on my mind

One of the first things gamedevs always learn is to not allocate memory needlessly. There are very good reasons for that. First, it’s a limited resource, especially on mobile devices. Second, allocation is not free – allocating and deallocating on the heap will cost you CPU cycles. Third, in languages with manual memory management like C or C++, each allocation is an opportunity to introduce subtle bugs that can turn into huge problems, anywhere from memory leaks to full crashes.

Unity uses .NET, or rather its open source cousin, Mono. It features automatic memory management, which fixes a lot of the safety problems; for example, it’s no longer possible to use memory after it has been deallocated (ignoring unsafe code for now). But it makes the cost of allocation and deallocation even harder to predict.

I assume you’re already familiar with the distinction between stack allocation and heap allocation, but in short: data on the stack is short-lived, but alloc/dealloc is practically free, while data on the heap can live as long as necessary, but alloc/dealloc becomes more expensive as the memory manager needs to keep track of allocations. In .NET and Mono specifically, heap memory gets reclaimed automatically by the garbage collector (GC), which is practically speaking a black box, and the user doesn’t have a lot of control over it.

.NET also exposes two families of data types, which get allocated differently. Instances of reference types such as classes, or arrays such as int[], always get allocated on the heap, to be GC’d later. Data of value type, such as primitives (int, float, etc.) or instances of structs, can live on the stack, unless they’re inside a container that already lives on the heap (such as an array of structs). Finally, value types can be promoted from the stack to the heap via boxing.
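
To make that distinction concrete, here’s a minimal sketch (illustrative only, not code from the original post):

public class AllocationExamples {
    public static void Demo () {
        int a = 42;                // value type: lives on the stack as a local variable
        int[] nums = new int[16];  // the array object is a reference type: allocated on the heap, GC'd later
        nums[0] = a;               // the ints themselves live inline inside the heap-allocated array
        object boxed = a;          // boxing: copies the int into a new heap-allocated object
    }
}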

OK, enough setup. Let’s talk a bit about garbage collection and Mono.

 

It’s a sin

Finding and reclaiming data on the heap that’s no longer in use is the job of the GC, and different collectors can vary drastically in performance.

Older garbage collectors have gained a reputation for introducing framerate “hiccups”. For example, a simple mark-and-sweep collector is a blocking collector – it pauses the entire program so that it can process the entire heap at once. The length of the pause depends on the amount of data allocated by the program, and if this pause is long enough, it can result in noticeable stutter.

Newer garbage collectors have different ways for reducing those collection pauses. For example, so-called generational GCs split their work into smaller chunks, by grouping all recent allocations in one place so they can be scanned and collected quickly. Since many programs like to allocate temporary objects that get used and thrown away quickly, keeping them together helps make the GC more responsive.

Unfortunately Unity doesn’t do that. The version of Mono used by Unity is 2.6.5, and it uses an older Boehm GC, which is not generational and, I believe, not multithreaded. There are more recent versions of Mono with a better garbage collector; however, Unity has stated that the version of Mono will not be upgraded. Instead they’re working on a long-term plan to replace it with a different approach.

While this sounds like an exciting future, for now it means we have to put up with Mono 2.x and its old GC for a while longer.

In other words, we need to minimize memory allocations.

 

Opportunities

One of the first things that everyone recommends is to replace foreach loops with for loops when working with flat arrays. This seems surprising at first – foreach loops make code so much more readable, so why would we want to get rid of them?

The reason is that a foreach loop internally creates a new enumerator instance. In pseudocode, a foreach loop like this:

foreach (var element in collection) { ... }

gets compiled to something like this:

var enumerator = collection.GetEnumerator();
while (enumerator.MoveNext()) {
  var element = enumerator.Current;
  // the body of the foreach loop
}

This has a few consequences:

  1. Using an enumerator means extra function calls to iterate over the collection.
  2. Due to a bug in the Mono C# compiler that ships with Unity, the enumerator creates a throwaway object on the heap that the GC will have to clean up later.
  3. The compiler doesn’t try to auto-optimize foreach loops into for loops, even for simple List collections – except for one special-case optimization in Mono that turns foreach over arrays (but not over Lists) into for loops.

Let’s compare various for and foreach loops over a List<int> or an int[] of 16M elements, adding up all the elements. And let’s throw a Linq extension in there too.

(The following measurements are taken using Unity’s own performance profiler, using a standalone build under Unity 5.0.1, on an Intel i7 desktop machine. Yes, I’m aware of the limitations of synthetic benchmarks – use these as rough guidelines, always profile your own production code, etc.)

Right, back to the post…

// const SIZE = 16 * 1024 * 1024;
// array is an int[]
// list is a List<int>

1a. for (int i = 0; i < SIZE; i++) { x += array[i]; }
1b. for (int i = 0; i < SIZE; i++) { x += list[i]; }
2a. foreach (int val in array) { x += val; }
2b. foreach (int val in list) { x += val; }
 3. x = list.Sum(); // linq extension

                              time   memory
1a. for loop over array .... 35 ms .... 0 B
1b. for loop over list ..... 62 ms .... 0 B
2a. foreach over array ..... 35 ms .... 0 B
2b. foreach over list ..... 120 ms ... 24 B
 3. linq sum() ............ 271 ms ... 24 B

Clearly, a for loop over an array is the winner (along with foreach over arrays thanks to the special case optimization).

But why is a for loop over a list considerably slower than over an array? Turns out, it’s because accessing a List element requires a function call, so it’s slower than array access. If we look at the IL code for those loops, using a tool like ILSpy, we can see that “x += list[i]” really gets turned into a function call like “x += list.get_Item(i)”.

It gets even slower with the Linq Sum() extension. Looking at the IL, the body of Sum() is essentially a foreach loop that looks like “tmp = enum.get_Current(); x = fn.Invoke(x, tmp)”, where fn is a delegate to an adder function. No wonder it’s much slower than the for loop version.

Let’s try something else, this time with the same number of elements arranged in two dimensions: 4K arrays or lists, each 4K elements long, using nested for loops vs nested foreach loops:

                                      time    memory
1a. for loops over array[][] ......  35 ms ..... 0 B
1b. for loops over list<list<int>> . 60 ms ..... 0 B
2a. foreach on array[][] ........... 35 ms ..... 0 B
2b. foreach on list<list<int>> .... 120 ms .... 96 KB <-- !

No big surprises there, the numbers are on par with the previous run, but it highlights how much memory gets wasted with nested foreach loops: (1 + 4096) enumerators x 24 bytes each ~= 96 KB. Imagine if you’re doing nested loops on each frame!

In the end: in tight loops, or when looping over large collections, arrays perform better than generic collections, and for loops better than foreach loops. We can get a huge perf improvement by downgrading to arrays, not to mention save on mallocs.

Outside of tight loops and large collections, this doesn’t matter so much (and foreach and generic collections make life so much simpler).

 

What have I done to deserve this

Once we start looking, we can find memory allocations in all sorts of odd places.

For instance, calling functions with a variable number of arguments actually allocates those args on the heap in a temporary array (which is an unpleasant surprise to those coming from a C background). Let’s look at doing a loop of 256K math max operations:

1. Math.Max(a, b) ......... 0.6 ms ..... 0 B
2. Mathf.Max(a, b) ........ 1.1 ms ..... 0 B
3. Mathf.Max(a, b, b) ...... 25 ms ... 9.0 MB <-- !!!

Calling Max with three arguments means invoking a variadic “Mathf.Max(params int[] args)”, which then allocates 36 bytes on the heap for each function call (36B * 256K = 9MB).
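
One easy way to sidestep that allocation – a sketch of a workaround, not something from the original post – is to nest calls to the two-argument overload whenever the argument count is known up front:

using UnityEngine;

public class MaxExample : MonoBehaviour {
    void Update () {
        int a = 1, b = 2, c = 3;

        // Nesting the two-argument overload stays allocation-free...
        int best = Mathf.Max(a, Mathf.Max(b, c));

        // ...whereas Mathf.Max(a, b, c) resolves to the params overload
        // and allocates a temporary array on every call.
    }
}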

For another example, let’s look at delegates. They’re very useful for decoupling and abstraction, but there’s one unexpected behavior: assigning a method to a delegate-typed local variable allocates a new delegate instance. We get a spurious heap allocation even if we’re just storing the delegate in a temporary local variable.

Here’s an example of 256K function calls in a tight loop:

protected static int Fn () { return 1; }
1. for (...) { result += Fn(); }
2. Func<int> fn = Fn; for (...) { result += fn.Invoke(); }
3. for (...) { Func<int> fn = Fn; result += fn.Invoke(); }

1. Static function call ....... 0.1 ms .... 0 B
2. Assign once, invoke many ... 1.0 ms ... 52 B
3. Assign many, invoke many .... 40 ms ... 13 MB <-- !!!

Looking at IL in ILSpy, every single local variable assignment like “Func<int> fn = Fn” creates a new instance of the delegate class Func<int32> on the heap, taking up 52 bytes that are then going to be thrown away immediately, and this compiler at least isn’t smart enough to hoist the invariant local variable out of the body of the loop.
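
The fix is straightforward. Here’s a minimal sketch (reusing the hypothetical Fn from the listing above, not code from a real project): create the delegate once – hoisted out of the loop as in case 2, or cached in a field – and reuse it, so the Func<int> instance gets allocated a single time instead of once per iteration.

using System;

public class DelegateCache {
    protected static int Fn () { return 1; }

    // Allocated once, when the type is initialized, instead of once per loop iteration.
    static readonly Func<int> CachedFn = Fn;

    public static int RunLoop (int iterations) {
        int result = 0;
        for (int i = 0; i < iterations; i++) {
            result += CachedFn.Invoke();  // no per-iteration delegate allocation here
        }
        return result;
    }
}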

Now this made me worry. What about things like lists or dictionaries of delegates – for example, when implementing the observer pattern, or a dictionary of handler functions? If we iterate over them to invoke each delegate, will this cause tons of spurious heap allocations?

Let’s try iterating and executing over a List<> of 256K delegates:

4. For loop over list of delegates .... 1.5 ms .... 0 B
5. Foreach over list of delegates ..... 3.0 ms ... 24 B

Whew. At least looping over a list of delegates doesn’t allocate new instances for each one, and a peek at the IL confirms that.

 

Se a vida é

There are more random opportunities for minimizing memory allocation. In brief:

  • Some places in the Unity API want the user to assign an array of structs to a property, for example on the Mesh component:
    void Update () {
      // new up Vector2[] and populate it
      Vector2[] uvs = MyHelperFunction();
      mesh.uv = uvs;
    }
    

    Unfortunately, as we mentioned before, a local array of value types gets allocated on the heap, even though Vector2 is a value type and the array is just a local variable. If this runs on every frame, that’s 24B for each new array, plus the size of each element (in the case of Vector2, 8B per element).

    There’s a fix that’s ugly but useful: keep a scratch array of the appropriate size and reuse it:

    // assume a member variable initialized once:
    // private Vector2[] tmp_uvs;
    //
    void Update () {
      MyHelperFunction(tmp_uvs); // populate
      mesh.uv = tmp_uvs;
    }
    

    This works because Unity API property setters will silently make a copy of the array you pass in, and not hold on to the array reference (unlike what one might think). So there’s really no point in allocating a fresh array every time – it’s safe to keep reusing and overwriting the same scratch array.

  • Because arrays are not resizable, it’s often more convenient to use List<> instances instead, and then add or remove elements as necessary, like this:
    List<int> ints = new List<int>();
    for (...) { ints.Add(something); }
    

    As an implementation detail, when a List is allocated this way using the default constructor, it will start with a pretty small capacity (that is, it will only allocate internal storage for a small number of elements, such as four). Once that is exceeded, it will need to allocate a new, larger chunk of memory (say, eight elements long) and copy the existing elements over.

    So if game code needs to create a list and add a large number of elements, it’s better to specify capacity explicitly like this, even overshooting a bit, to avoid unnecessary re-sizing and re-allocations:

    List<int> ints = new List<int>(expectedSize);
    
  • Another interesting side effect of the List<> type is that, even when it’s cleared, it does not release the memory it has allocated (ie. the capacity remains the same). If you have a List with many elements, calling Clear() will not release this memory – it will just clear out its contents and set the count to zero. Similarly, adding new elements to this list will not allocate new memory, until capacity is reached.

    So, similar to the first tip, if there’s a function that needs to populate and use large lists on every frame, a dirty but effective optimization is to pre-allocate a large list ahead of time, and keep reusing it, clearing it after each use, which will not cause the memory to be re-allocated (see the sketch after this list).

  • Finally, a quick word about strings. Strings in C# and .NET are immutable objects, so string concatenation generates new string instances on the heap. When assembling strings from multiple components, it’s usually better to use a StringBuilder, which has its own internal character buffer and creates a single new string instance at the end. Code that is single-threaded and not re-entrant could even share a single static instance of the builder, resetting it between invocations so that the internal buffer gets reused (also shown in the sketch below).
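
To make the last two tips concrete, here’s a minimal sketch of the reuse pattern (the names and sizes are invented for illustration):

using System.Collections.Generic;
using System.Text;
using UnityEngine;

public class ReuseExample : MonoBehaviour {
    // Pre-allocated once with a generous capacity; Clear() keeps the capacity,
    // so refilling the list allocates nothing until that capacity is exceeded.
    private readonly List<int> scratch = new List<int>(1024);

    // A single shared builder, for code that is single-threaded and not re-entrant.
    private static readonly StringBuilder builder = new StringBuilder(256);

    void Update () {
        scratch.Clear();                    // contents gone, capacity retained
        for (int i = 0; i < 100; i++) {
            scratch.Add(i * i);             // no allocations while under capacity
        }

        builder.Length = 0;                 // reset the builder, keep its internal buffer
        builder.Append("count: ").Append(scratch.Count);
        string label = builder.ToString();  // the one string allocation we actually need
    }
}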

 

Was it worth it?

I was inspired to collect all of these after a recent bout of optimizations, where I got rid of some pretty bad memory allocation spikes by digging in and simplifying code. In one particularly bad case, one frame allocated ~1MB of temporary objects just by using wrong data structures and iterators. Relieving memory pressure is especially important on mobile, since your texture memory and your game memory both have to share the same, very limited pool.

In the end, these are not rules set in stone – they’re just opportunities. I actually really like Linq, foreach, and other productivity extensions, and use them often (maybe too often). These optimizations only really matter when dealing with code that runs very frequently or deals with a ton of data; most of the time they’re not necessary.

Ultimately, the standard approach to optimization is right: we should write good code first, then profile, and only then optimize actual observed hot spots, because each optimization reduces flexibility. And we all know what Knuth had to say about premature optimization. :)

 


How to tune a simulation game

by robert, April 23rd, 2015

A while back I got asked a question about tuning simulation games. Another developer wanted to know: is there a better way than just tweaking and playtesting? They were working on a simulation title of their own, and dreading the cycle of tweaking numbers and playing and tweaking numbers some more, especially since the cycle wasn’t very fast – simulation tuning problems tend to only show up after a while of playing, and a turbo mode cheat only gets you so far.

This is something we dealt with recently, as we were balancing (and rebalancing) our game 1849, which is a gold rush era city builder / management tycoon game. So I figured I’d repost my answer here as well, in case it helps someone else in the future.

First, a bit of background: 1849 is all about managing the city’s economy. There are about 50 different buildings and 20 different resources – and each building can produce raw resources (a wheat field produces wheat, a hunting camp produces leather and meat), convert them (a brewery converts barley into beer, a gold mine produces gold but consumes pickaxes in the process), or consume them (houses where people live consume all sorts of food and goods resources). On top of that, you have to pay workers’ wages at each resource building, so you want to make sure it’s not sitting idle; otherwise you lose money. Finally, you get money by having houses buy resources and consume them (or by trading with neighboring cities). So most of the game is about optimizing your resource chains and production logistics, and not producing too little or too much.

During the design phase, we mapped out all resource chains as a big graph, to make sure there were no surprises. I no longer have a photo of that anywhere, but it was exactly what you’d expect: a directed graph, where each node was a building and each edge was a resource (eg. wheat edges going from the wheat farm to the bakery and to the distillery). We did this to verify that we had a variety of chains from sources to sinks, some shorter and some longer, as well as at least one feedback loop (in our case, the iron mine consumes pickaxes and produces iron ore, the smelter converts that into iron, and the blacksmith converts that into pickaxes, which get consumed again by all mines).

Once we settled on resources and buildings, we had to figure out our tuning values: how much wheat is produced per turn? How much wheat do you need to make one unit of bread? If the values are too generous, the player accumulates a surplus and the game becomes too easy; if they’re too stingy, you can fall into a “death spiral” where the player keeps running out of everything and the workers move out, causing the economy to collapse.

The “brute force” approach would be to just make up some tuning values, and play the game a bunch of times and tweak. That can work sometimes but it’s slow, and we wanted to do it much faster. So we turned to the game designer’s best friend: Microsoft Excel. :) (And I’m not joking about the “best friend” part.)

We built stationary models of a number of test cities – some small, some large, some with specific building combinations in them. By a stationary model, I mean a model of how much of each resource is produced and consumed on a single turn.

First, there was a master sheet that listed all buildings and all of their tuning values (production and consumption levels, cost, population produced or required):
[image: tuning1]

Then there was one sheet per simulated city, which listed all the buildings and how many instances of each we expected to be built:
[image: tuning2]

Each city sheet would then pull tuning values from the master spreadsheet, and calculate all resource consumption and production, as well as how much money you’re making or losing, how many workers you need (and therefore whether you have enough houses) and so on. As long as all the numbers stayed around zero, the city was pretty well balanced. If they went too far into positive or negative, they would get highlighted in red or green on the spreadsheet, and it was a signal that this part might need extra attention.

This also made tuning almost instantaneous: if you changed a tuning value in the master spreadsheet, it would propagate instantly to all city spreadsheets, and you could see right away if this helped or harmed any of the test scenarios.
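
In code form, each city sheet boils down to a per-turn net for every resource: for each building type, multiply the number of instances by its per-turn production or consumption, and sum across the whole city. Here’s a minimal sketch of that check (the building names and numbers are invented for illustration, not the actual 1849 tuning values):

using System;
using System.Collections.Generic;

public class StationaryModel {
    // Per-turn resource deltas for each building: positive = produced, negative = consumed.
    // All names and numbers here are purely illustrative.
    static readonly Dictionary<string, Dictionary<string, float>> tuning =
        new Dictionary<string, Dictionary<string, float>> {
            { "wheat_farm", new Dictionary<string, float> { { "wheat", +2.0f } } },
            { "bakery",     new Dictionary<string, float> { { "wheat", -1.0f }, { "bread", +1.0f } } },
            { "house",      new Dictionary<string, float> { { "bread", -0.5f } } },
        };

    // A test city: how many of each building we expect to be built in this scenario.
    static readonly Dictionary<string, int> testCity =
        new Dictionary<string, int> { { "wheat_farm", 2 }, { "bakery", 3 }, { "house", 5 } };

    public static void Main () {
        // Net production per turn for each resource: sum over buildings of (count * delta).
        var net = new Dictionary<string, float>();
        foreach (var building in testCity) {
            foreach (var flow in tuning[building.Key]) {
                float current;
                net.TryGetValue(flow.Key, out current);
                net[flow.Key] = current + building.Value * flow.Value;
            }
        }
        // A well-balanced city hovers around zero on every resource; a large positive or
        // negative net is the same surplus/shortage flag the spreadsheet highlights in color.
        foreach (var resource in net) {
            Console.WriteLine(resource.Key + ": " + resource.Value + " per turn");
        }
    }
}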

So that’s how we did tuning of all buildings and resources together in a single city. Then, to set up difficulty progression between cities in a campaign, we did what a lot of tycoon games do: as the player goes through the game, increase sinks and resource consumption, which puts pressure on the player to produce more and more (while also giving them more money to work with). This was also verified with the tuning spreadsheets – we could tune the master sheet and immediately check how that affected various cities in a campaign.

There were also some details that turned into additional challenges, like one-shot timed quests for the player to complete, but those we usually tuned by brute force, since there were few of them and their effects were not as easy to model in our fairly simple spreadsheet.

Hope this is interesting – and maybe even useful!


Introducing Office Workers

by matt v, April 17th, 2015

We’d like to introduce some of the people that will populate the skyscrapers in Project Highrise.

Who are they?
Who is going to be occupying your building? What will they be doing? Those are two of the first questions that you, as the building architect, developer, and manager, will have to answer in Project Highrise.

A residential tower is a very different creature from an office block. And a resort tower filled with hotel rooms is unlike either of those. If your challenge is to create a mixed-use building, how will you balance the conflicting needs and desires of your two distinct population groups?

Your building will be full of people – there could be hundreds (we’re working on thousands) moving about your building, coming and going, eating and shopping, living and working. Each one of those people will have their own experiences. By and large your success as a developer and manager depends on making sure that those experiences are positive.

So, let’s meet a few of them.

If you’re creating a building to attract Class A office space, here’s your target audience:

[image: officenpcs]

They’re going to be coming to work every day in your tower. And office workers are creatures of habit and routine. Ensure that cafes are open in the morning, that the food court has enough restaurants, and that elevators and escalators are all running smoothly. Oh, and they hate waiting, too. If your cafes start to get crowded, you can be certain that you will hear about it from them.

They will also have different preferences for where they work. Some offices will want to be in high-traffic areas, near elevators or toward the ground floors. Others will want just the opposite – a quiet corner office far from the crowds and above all the traffic noise.

Will they all look like that?
Yes and no. The game will have every skin tone natural to the human race. And maybe some hair colors that aren’t. Clothing will also change color. There are male and female versions of every character in the game.

In addition to walking around the building, they’ll also have a bunch of other actions such as working at their desks or getting irritated when the line at their favorite sushi place is too long.

We’ll be introducing you to more building occupants as the months go by.


Minimum Sustainable Success? Know Your Context.

by robert, April 14th, 2015

I read Dan Cook’s recent gamedev sustainability polemic with anticipation and interest, but I couldn’t make heads or tails of it for a while. Yes, we need to be realistic about projecting revenues – I could not agree with that more! But then the details got murky.

The crux of Dan’s argument seemed to be: 1. the chance of failure of any individual project is very high, so 2. we need to make tons of projects for a chance of a breakout success, and 3. don’t think a single success is worth very much, because it has to pay for past and future failures as well. To explain this claim, he then does a neat walk-through of how many games they made at Spry Fox in the last 5 years (31 prototypes, 11 released projects), and how many actually made money (4 broke even, 3 were successes that paid for the failures).

Except this part made zero sense to me when I read it. Who can make 31 prototypes and release 11 games in 5 years? What time travel dark magic is this? :)  Because in my neck of the woods, it takes at least a year to make a game, and even that’s when you’re very disciplined about sticking to the schedule. Writing a game is a commitment closer to writing a book – and it takes just as much out of you.

Finally it dawned on me. That talk should have been titled differently: “Minimum Sustainable Mobile Success”. And then – then it makes much more sense. Make a ton of little games. Throw them at the wall and see what sticks. Don’t bet your farm on any single title because your success depends entirely on the roll of dice that is the app store featuring and the audience’s fickle response. And don’t make a premium title.

Except… this advice is not at all applicable to the PC market. You know what happens if you try to make a game in a few months and release it? Nothing. It won’t pass Greenlight, it won’t show up on Steam, it won’t get an audience. It might show up on itch.io, along with a ton of other minigames that people play for free, but that’s not a way to build revenue. The PC market is a very different beast, and it demands games that are larger, more fully developed, that have more meat on their bones. And players are willing to pay a premium price, if they get what they paid for.

So the tactics for indie survival in the PC world are going to be very different. You still run the risk of the game not doing well, but the shotgun approach is only going to backfire, because that’s not what the audience wants.

Instead, you have to start thinking less like a pop artist releasing singles on iTunes, and more like a book author committing a year or more of their life to a single piece of work. Instead of mitigating risk via quantity, mitigate risk via quality. Just start with the assumption that your game is not going to be a blockbuster – so what can you do to make sure it’s not an abject failure either, and gives you another swing at greater success next time? (And this metaphor is not ideal either, because success is not static, it changes as you acquire a reputation as a creator.) You have to find your fans, and in order to do that, understand who you’re writing for, and understand what is interesting about your game and why they should care. And just as importantly, figure out where your audience is, how to get your game in front of them, and also how many will (realistically) want to pay for it.

The point of all this is: if you’re not on mobile and you’re making larger-scoped games, the shotgun approach is not viable. A portfolio of games is a great goal, but as an indie developer it will take you many years to get there. So don’t go wide. Work on minimizing the risk of failure of each particular title, and on building your portfolio over the long term.

As with everything in life, context is key.

 


Our Next Game – Project Highrise

by matt v, March 19th, 2015

What is it?

In Project Highrise, you build and manage a modern-day skyscraper – a vertical ecosystem of offices, businesses and residences.

[image: blackandwhite-web]

A skyscraper is an intricate machine of interlocking systems, which depend on each other in their daily function. Tenants will expect everything to just work, and to have all of their needs met under one roof. It’s your job to keep this machine running smoothly and efficiently. Keep your ear to the ground, but your eye to the future.

[image: Promo_Office_web]

When can I play?
Early 2016. We’re planning for an Early Access release in late summer or early fall 2015.

[image: building-web]

More info?
Learn more about Project Highrise and sign up for our emails here:
www.somasim.com/highrise


Slides from Matt’s Talk at GDC 2015

by matt v, March 13th, 2015

Here’s a PDF of the talk that I gave at GDC 2015. The title of the talk was “How to Make and Self-Publish a Game in 12 Months”.

Quick Summary:
Making your first game as a new indie studio means that on top of actually making the game, you’ve got to do pretty much everything yourself. Self-fund, self-market, self-publish, self-incorporate, self-everything. This talk will chronicle that crucial first year as we made our first game and prepared it for a cross-platform PC and tablet release. There were some great moments and some harsh lessons in that year, with great advice from other game dev friends and mentors. This presentation will share everything that we learned.

Download/view the PDF here.


What we’re playing: Memoir ’44

by robert, January 4th, 2015

I’ve been playing a bit of Memoir ’44 lately. It’s a really well done WWII boardgame, with a light ruleset and short sessions. I’ve seen it compared to Advance Wars or Final Fantasy Tactics, which sounds pretty spot on, actually.

Like many war games, it takes place on a hex board that gets set up differently for each scenario, so you end up with a variety of terrain modifiers (forests, hills, towns) and units spread all over the map, depending on which battle of WWII is being reenacted. The game booklet comes with a number of good scenarios that have different setup and victory conditions, and even though they’re not symmetrical (because neither were the historical battles), you can just play each short game twice, once as axis and once as allied powers, and then use total points from both to determine the winner.

But from a game design point of view, probably the coolest part about this game is actually their treatment of randomness and uncertainty. Costikyan’s book first clued me in to this, and it’s very interesting.

Quick overview: as a commander you control a bunch of units of different types (infantry, tanks, artillery) with different abilities. But you can’t just activate any unit you want – you have cards in your hand that list various possibilities, for example, “activate up to 2 infantry on the left side of the board”, or “no more than 3 tanks at any position”, or “up to 3 units of any type that are damaged”. On each turn, you play one of those cards (and pick up a new one), activate those units, and move and/or attack. Then attacks are done by rolling from one to four dice to determine the result.

So the first source of randomness is the tactical attack die roll, and that’s very easy to understand – the probabilities of a successful attack are easy to calculate, and terrain modifiers just reduce the number of dice you get to roll. It’s a pretty straightforward stationary source of randomness.
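
To put a number on “easy to calculate”: if, say, a unit hits its target on 3 of the 6 faces of each die, then rolling three dice gives a 1 - (1/2)^3 = 7/8 chance of scoring at least one hit, and a terrain modifier that takes one die away drops that to 1 - (1/2)^2 = 3/4.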

But the more interesting source of randomness is the command cards in your hand. The game is cleverly constrained – you can’t activate units unless you have a card for them, so if your base on the left side of the board is being attacked and you only have infantry units there, you’d better have a card that matches this situation (eg. “activate up to 2 infantry on the left side”), otherwise you’re toast. And of course the cards then become a strategic resource – do I play my left flank card now, or hold it until later for some purpose?

And since card decks are a non-stationary random process, that can also be gamed – you could in theory learn to count cards, to give yourself an edge. I say “in theory”, though, because each session is short enough, and the card deck large enough, that this is unlikely to matter much.

So this combination of two sources of randomness, one of which limits what you can do, and the other determines the outcome of what you did, gives the game a very nice flavor – I’m no longer an omnipotent commander that can move all units at will, instead I have to pick and choose between only a few options at a time, sometimes good but sometimes all unpalatable, and worry about not closing off my future trajectory by doing something unnecessary right now. Kind of like the real world, come to think of it.

 


1849: Nevada Silver is Now Available

by matt v, September 16th, 2014

[image: 1849logo_nevada]

Today marks the release of 1849’s first expansion pack. So grab your pickaxe and mosey on over to the Comstock Lode where new fortune awaits in 1849: Nevada Silver.

[image: capitol]

Here’s some of what’s new in Nevada:

  1. Six new cities/scenarios! They should be pretty challenging – especially the last three once you’ve picked up the basics in the first couple of cities.
  2. There are trains! Now you’ll have to work with trains and their schedules for the delivery of crucial raw materials. The trading system in Nevada is completely redone and you can have some trades repeat as long as you have the cash or resources on hand when the train arrives.
  3. New resources and buildings have been added. Some of the buildings will require you to commit resources to their construction before you can use them. And this being Nevada, there’s a casino.
  4. The last two levels have landmark buildings to construct – the Virginia & Truckee Railroad in Virginia City and the Nevada State Capitol in Carson City.
  5. And a few other surprises await you in the Comstock Lode of Nevada…

Visit the 1849: Nevada Silver webpage to read more about the expansion pack and to find out where you can buy it.

