Diablo is one of the most critically and commercially acclaimed videogame franchises of all time. It has spawned numerous “clones,” and its gameplay conventions have been adapted across multiple genres. The latest entry in the series, Diablo III, boasts the honour of being both the fastest-selling and the best-selling PC game of all time.
So what makes the series so special? Almost to a fault, Diablo games are described as being incredibly addictive. That’s a fairly vague assertion, though, so I figured it’d be interesting to take a closer look at the original game and get a little closer to nailing down its je ne sais quoi.
From Roguelike to Diablo I
Aesthetically it was not only a big step up from ASCII-based visuals, but it also outshone most other CRPGs of the era. Diablo ran at a 640×480 resolution and featured a dark, Gothic world rendered entirely in CG. Everything in the game animated smoothly, playable characters changed appearance based on equipment, a rudimentary lighting system helped set the mood, and a plethora of spell effects and item icons created a much more impressive presentation than that of its dungeon-crawling brethren.
Diablo was also fully voiced, and its music — especially the theme of Tristram — is fondly remembered to this day. More subtly, the game featured excellent sound design. This is most evident in the item drops that are represented by a spinning graphical icon, a whooshing sound effect, and an additional audio clip played when the item hits the ground. The sounds differed based on the item type that was spawned, and were clearer, louder, and longer in duration than all the other sound effects in the game. These cues ensured that spawned treasures were rarely missed and greatly enhanced the Pavlovian effects of loot-hunting.
From a gameplay perspective, Diablo retained many defining aspects of a roguelike despite being a real-time game. Randomization was present in level layouts, monster type and placement, item generation, and even the available quests. Character classes shared the same basic statistics and could all learn a large swath of common spells, but possessed unique abilities and statistical progression. Shrines and consumable items provided further boosts and ailments, and inventory management was a big part of the experience (even gold had to be accounted for, one 5,000-coin stack at a time).
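That gold-stacking quirk is simple enough to sketch. The 5,000-coin stack size is from the text above; the slot arithmetic is just the implied consequence:

```python
# Gold takes up inventory slots in stacks of at most 5,000 coins,
# as described above, so carrying capacity is a real constraint.
STACK_MAX = 5_000

def slots_for_gold(coins: int) -> int:
    """Inventory slots needed to carry a given amount of gold."""
    return -(-coins // STACK_MAX)  # ceiling division

print(slots_for_gold(12_000))  # 3 slots: 5,000 + 5,000 + 2,000
```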
However, Diablo wasn’t quite as unforgiving as a typical roguelike. There was no permadeath, and saving/loading was allowed at any point. If the player died while playing online, they would respawn in town and get the chance to retrieve their equipment from their corpse. There was even an option to restart the game at any time, retaining all of the character’s armament and upgrades.
None of the items severely handicapped the player either, and it was possible to equip unidentified items to benefit from their statistical boosts while foregoing their special abilities. A superimposed minimap was an option as well, and compensated for the more zoomed-in viewpoint compared to typical roguelikes.
And of course there was multiplayer.
During the time of Diablo’s release, multiplayer usually meant configuring ports and IP addresses. Much like the server browsers that came later, such as GameSpy Arcade and Xfire, Diablo’s Battle.net circumvented the issue by providing an easy graphical interface for getting online. The gameplay adjustments worked remarkably well too, with all monsters simply being given twice as much health for every player in the game. Granted, friendly fire was always on, but this was still a very user-friendly online experience for the time.
All in all, Diablo was a very sleek package, and miles ahead of what one would expect from just a graphical roguelike.
In large part Diablo consisted of lumbering around Tristram selling loot, healing, swapping equipment, and getting ready for the next excursion. Since these pit-stops were not the meat of the game, I decided to limit my observations to the actual dungeon exploration. I played as a Warrior and endeavoured to kill every enemy, open every container, pick up every item, and generally explore every corner of every map before moving on to the next area.
As it turned out, 8 hours of hacking and slashing was just enough time to complete the whole game.
Below is a graph that illustrates the amount of time (in minutes) devoted to each map of the dungeon, how many trips it took to eliminate all the enemies and pick up all the loot, and how many experience levels my Warrior gained on each floor.
The purple marks indicate special maps that were not entirely randomized and served as arenas for boss battles.
Right off the bat it’s obvious that the amount of time required for each map climbed steadily, but still varied noticeably from floor to floor. The major differences were caused by the static maps, but even without these there was enough variation to prevent the experience from feeling wholly homogenized. The shortest map clocked in at 7.5 minutes and the longest at 31 minutes, with the average coming in at 19.4 minutes. However, it’s also worth noting that the variations were in part due to my Warrior refusing to use the teleport spell (making backtracking more time consuming in some layouts) and combat difficulty (needing to spend time kiting enemies and running away to recharge health and mana).
The number of trips required to fully explore each floor was more consistent, but this is more indicative of the loot-hauling than anything else. Having played a sturdy character with a focus on health regeneration, retreating to Tristram in order to recuperate wasn’t an issue until the very end of the game. Instead, the trips were a direct result of inventory limitations and my determination to collect every spawned item. The relative consistency of the trip counts indicates that despite unique layouts, monster populations, treasure chest distributions, etc., each floor produced a similar number of items throughout the game. In total, the fewest trips needed to clear a floor was 1 and the most 5, with the average coming in at 3.1.
Finally, much like the number of trips, the experience progression was fairly consistent. On average, 1.3 levels were gained for every dungeon floor. This is noteworthy as none of the statistical progression in Diablo involves randomness. Each level-up grants 5 points that the player can distribute among 4 attributes, and each character class has a specific starting and maximum number for each attribute. This results in all the uniqueness of a character build coming exclusively from items — via equipment and spells — thereby putting further emphasis on loot-drops.
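The deterministic level-up scheme can be sketched in a few lines. The 5-points-per-level rule and the per-class attribute caps are from the text; the actual cap values below are placeholders, not the game’s real numbers:

```python
# Hedged sketch of Diablo's level-up scheme: 5 points per level,
# split among 4 attributes, clamped at per-class maxima.
# The cap values are illustrative placeholders.
POINTS_PER_LEVEL = 5
WARRIOR_CAPS = {"strength": 250, "magic": 50, "dexterity": 60, "vitality": 100}

def level_up(stats: dict, spend: dict, caps: dict) -> dict:
    """Apply one level-up's point distribution, respecting class caps."""
    if sum(spend.values()) > POINTS_PER_LEVEL:
        raise ValueError("only 5 points per level-up")
    out = dict(stats)
    for attr, pts in spend.items():
        out[attr] = min(out[attr] + pts, caps[attr])
    return out
```

Because there is no random roll anywhere in this process, two characters of the same class and level can only differ through their items.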
Overall Diablo is split up into 4 zones, each one consisting of 4 dungeon floors. There’s a steady progression of new monsters and map objects as the player delves further underground, but the interesting part is that each zone has a slightly different approach to generating its layout. This helps to give all the areas a unique atmosphere and even has some implications on exploration and combat mechanics.
Another interesting factor is that the availability of quests is randomized as well. This is quite substantial as it means certain characters and bosses — and by extension entire maps — cannot be encountered in a single playthrough. Combined with the ability to start the game anew while retaining character progression, it’s clear that Diablo was designed with replayability in mind.
Item spawning is often credited as being Diablo’s “secret sauce,” but all of its algorithms have been meticulously catalogued by devoted fans. According to the compiled data, entities that drop items follow slightly different formulas that help give them defining characteristics: chests drop 0-3 treasures based on size, sarcophagi can contain hidden monsters, weapon racks always yield armaments, etc. However, it’s defeated enemies that make up the bulk of the loot. Let’s take a look:
*Some monster types (Winged Fiends and Hork Spawns) never drop items.
*Unique monsters always drop an item, and it can either be a book or an item that can take on a prefix and/or suffix (or be unique).
Well, that’s rather simple, isn’t it? Over half the time nothing drops. When it does, it’s usually gold. There’s only a 10.7% chance an item drops from a regular monster, and special ones always drop good loot.
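The drop roll for a regular monster can be sketched as a single weighted choice. The 10.7% item chance is from the compiled data above; the exact nothing/gold split is an assumption, chosen only to be consistent with “over half the time nothing drops”:

```python
import random

# Hedged sketch of a regular monster's drop roll.
P_NOTHING = 0.55   # assumed; the text only says "over half"
P_ITEM = 0.107     # from the compiled data quoted above
P_GOLD = 1.0 - P_NOTHING - P_ITEM

def roll_drop(rng: random.Random) -> str:
    """Return 'nothing', 'gold', or 'item' for one monster kill."""
    r = rng.random()
    if r < P_NOTHING:
        return "nothing"
    if r < P_NOTHING + P_GOLD:
        return "gold"
    return "item"
```

A unique monster would skip this roll entirely and always produce an item, per the footnote above.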
Aside from quest MacGuffins, items are either consumable or equipable. Consumables consist of health and/or mana recharging potions, stat-raising elixirs, scrolls that cast single-use spells without draining mana, and magic books that teach spells for permanent use.
Equipable items are split into weapons, shields, armour, helms, rings, and amulets. Armour changes the appearance of the player avatar, shields increase the chance of blocking, weapons have different attack speeds and monster-family properties (e.g., swords do 150% damage to Animals, 100% damage to Demons, and 50% damage to Undead), while rings and amulets are indestructible and always Magic or Unique.
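The monster-family mechanic amounts to a per-weapon-class multiplier table. The sword figures below are the ones quoted above; other weapon classes would carry their own tables:

```python
# Damage multipliers by monster family for swords, using the
# figures quoted above (150% / 100% / 50%).
SWORD_MULTIPLIER = {"Animal": 1.5, "Demon": 1.0, "Undead": 0.5}

def sword_damage(base_damage: int, family: str) -> int:
    """Scale a sword's base damage by the target's monster family."""
    return int(base_damage * SWORD_MULTIPLIER.get(family, 1.0))

print(sword_damage(100, "Animal"))  # 150
print(sword_damage(100, "Undead"))  # 50
```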
All of the equipment also belongs to one of the three main types:
Normal – Standard swords, shields, helmets, etc.; no special attributes.
Magic – Enchanted items shown in blue; can contain one of the aforementioned prefix and/or suffix modifiers.
Unique – Gold items with multiple non-randomized statistics and a set of special properties; most often obtained through completing quests.
Normal items follow a set progression table for each equipment category, e.g., the helms table looks like this: Cap->Skull Cap->Helm->Full Helm->Crown->Great Helm. Each sub-category contains a unique visual, damage/armour range, durability range, sell-price, etc.
Magic items are upgraded versions of normal items enhanced with one or two special properties. These properties include elemental damage/resistance, character attribute enhancement, hit chance boost, etc. Much like regular equipment, special properties can be split into categories and sub-categories, e.g., the added fire damage category contains the sub-categories of: Flame->Fire->Burning->Flaming, each one with a different extra-damage range.
Sub-categories also fall into a prefix or a suffix slot and get added to the actual name of the item, e.g., the Ivory Mace of Swiftness contains the “Ivory” prefix, granting 31-40 additional magic resistance, and the “Sorcery” suffix, increasing the player’s Magic attribute by 16-30.
It’s also worth noting that special properties can only be applied to certain categories of items, and some prefixes are incompatible with other types of suffixes.
… the highly-coveted suffix enchantment “of the Zodiac” (adds between 16 and 20 to all four Attributes evenly) is only available on rings and amulets, so the game engine cannot generate Shields of the Zodiac or Helms of the Zodiac.
…systemic limitations within the game mechanism prevent some prefixes and suffixes from appearing together on the same item. For example, the item “Godly Plate of the Whale” (abbreviated on Battle.net as “G.P.O.W.”) cannot be generated by any monster or vendor in the game.
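This gated affix generation can be sketched as a filter-then-roll process. The affix pools and slot restrictions below are illustrative stand-ins, except for “of the Zodiac” being limited to rings and amulets, as quoted above:

```python
import random

# Illustrative affix tables. Only the "of the Zodiac" restriction is
# taken from the text; the other slot sets are placeholder assumptions.
PREFIXES = {
    "Ivory": {"mace", "helm", "ring", "amulet"},
}
SUFFIXES = {
    "of Sorcery": {"mace", "staff", "ring", "amulet"},
    "of the Zodiac": {"ring", "amulet"},  # rings/amulets only
}

def name_magic_item(base: str, slot: str, rng: random.Random) -> str:
    """Roll a prefix and/or suffix, pruned to affixes legal for the slot."""
    prefixes = [p for p, slots in PREFIXES.items() if slot in slots]
    suffixes = [s for s, slots in SUFFIXES.items() if slot in slots]
    parts = []
    if prefixes and rng.random() < 0.5:
        parts.append(rng.choice(prefixes))
    parts.append(base)
    # a Magic item carries at least one affix, so force a suffix
    # if the prefix roll came up empty
    if suffixes and (rng.random() < 0.5 or len(parts) == 1):
        parts.append(rng.choice(suffixes))
    return " ".join(parts)
```

The pruning step is the important part: because illegal combinations are filtered out before the roll, a “Helm of the Zodiac” simply cannot be produced.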
Unique items can contain up to six special properties, but are not enhanced via the same mechanism as Magic items. Instead, each Unique item has a specific name, a potentially unique icon, and a custom set of statistics/special properties. For example, the “Gotterdamerung” helm has an armour class of 60, adds 20 to all character attributes, lowers damage by 4, drops all resistances to 0, and decreases the player’s light radius by 40%.
The actual formulae for generating items are quite complex and take into consideration the player’s level, type of spawn, location, existing items, and various other variables. Jarulf’s Guide offers a full breakdown of these, but here’s what the system produced during my playthrough:
It’s immediately obvious that consumables made up the largest group of spawned items. A total of 175 dropped throughout the game, with an average of 9.2 per map. As usual, the smaller static maps skewed this to be a bit lower than the median.
Consumables steadily declined throughout the game, and this is notable as it’s a fairly subtle way of adjusting difficulty. Early on in the game, health and mana potions are quite abundant in order to facilitate exploration and experimentation for newcomers. Later on, it’s expected that the player has a greater mastery of the game’s mechanics and needs to work harder to maintain momentum.
Normal equipable items made up the second largest group of drops and, somewhat surprisingly, remained prevalent long after they had outlived their usefulness. Even in the last map of the game, Normals dropped quite frequently despite only being beneficial during the opening map or two. A total of 144 Normal items dropped during my playthrough, with an average of 7.6 per map.
Magic items didn’t start appearing until the second map, and from there on spawned regularly following a rough sine wave pattern. The 99 Magic drops in the game represented the bulk of the equipment changes, but only outnumbered Normal drops on a handful of maps with an average of 5.2 per floor.
Only 8 Unique items dropped throughout the whole game, an average of 0.4 per map, resulting in 4 equipment changes. This 50% equipment-change ratio might not seem that high for these powerful artefacts, but it’s a much higher ratio than that of Normal or Magic items: 2.1% and 8.1% respectively. What’s more, Unique items had the highest longevity out of all the equipables due to their usefulness, e.g., the Skeleton King’s Undead Crown obtained in map #4 was the helm I wore for the remainder of the game due to its life-leeching properties.
Overall 25 equipment changes took place in the game, but only 15 of these came from loot drops. This makes for a paltry 6.0% overall equipment-change ratio, which is all the more stark considering various item slots were rarely swapped.
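The quoted ratios can be reproduced directly. The per-type change counts (3 Normal, 8 Magic, 4 Unique) are inferred from the percentages above:

```python
# (items dropped, equipment changes) per type; change counts inferred
# from the quoted percentages: 3/144, 8/99, 4/8.
drops = {"Normal": (144, 3), "Magic": (99, 8), "Unique": (8, 4)}

for kind, (dropped, changes) in drops.items():
    print(f"{kind}: {changes / dropped:.1%}")     # 2.1%, 8.1%, 50.0%

total_dropped = sum(d for d, _ in drops.values())   # 251 equipables
total_changes = sum(c for _, c in drops.values())   # 15 from drops
print(f"overall: {total_changes / total_dropped:.1%}")  # 6.0%
```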
Economies in RPGs can be quite tricky. Their goal is to give the player plenty of paid options for goods and services without making the currency too rare or too prevalent. Diablo does a great job of hitting this goal as there’s always something to spend money on: identifying items, repairing equipment, recharging magic staves, or even getting a glance at one shopkeeper’s inventory. And of course there are the items themselves:
During my playthrough, consumables made up the bulk of the purchasables and climbed sharply towards the end of the game. The main reason for this was the need to stock up on health/rejuvenation potions in order to plow through the tougher monster encounters.
Normal items were purchased during the first 2 maps, but quickly became obsolete. Magic items replaced them and — with no possibility of buying Unique items — became the main gold-sink. Magic equipment was very expensive, topping off at tens of thousands of gold coins, but well worth it. In a way, Magic items made the economy work by making the shopkeepers a paid-for loot drop.
Unlike in most RPGs, the inventory of each seller was generated via an algorithm similar to that of the in-dungeon item spawning. Merchant items tended to stick around until the player leveled up, allowing a bit of time to gather enough gold to purchase them, and this helped alleviate potential issues with consistently getting inappropriate items, e.g., high-dexterity bows for the Sorcerer. The shopkeeps were also notable for having their own personalities and adding a bit of flavour to the economy itself, e.g., they’d only pay 1 gold piece for “cursed” items (Magic equipment with negative special properties), Wirt the Peg-Legged Boy offered just one item at a time but could sell the best equipment in the game, etc.
Overall only 4 Normal and 6 Magic items were bought during my playthrough — compared to 157 consumables — but these accounted for a massive 40% of the equipment changes.
The actual amount of gold obtained by picking up coins and selling items can be seen below:
Despite a few sharp peaks and valleys, gold accumulation slowly grew throughout the game. This was true of both the quantities of gold pieces dropped and the prices of sold items. What’s most interesting here is that it felt like my Warrior was constantly gathering huge amounts of gold, but these paled in comparison to how much merchants paid for equipment: I accumulated 60,932 gold pieces in the game, but less than 10% of that came from gold drops.
When I completed the game, I had just over 5,000 gold pieces left over in my inventory. This was roughly the same amount of gold as the total number of coins picked up throughout the whole game. Essentially, if no gold drops were ever collected, my Warrior still could’ve afforded most (if not all) of his equipment and consumables.
Gold is by far the most common type of drop in the game, so it’s easy to see how it creates the impression that it’s available in large quantities. Since there are still plenty of useful ways to spend small amounts of money — buying potions or repairing equipment — it never feels pointless either. Ultimately gold drops serve as something of a large filler in Diablo, but it’s done quite subtly and the economy never suffers for it.
Without a doubt, Diablo was very well received. It had an iconic look that spawned countless GeoCities and Tripod sites with Gothic fonts and firewall gifs. It sounded great as well, and its interface made both single-player and multiplayer easy to pick up. The roguelike elements it adopted were also fairly uncommon at the time, giving many players a brand new type of experience via large-scale randomization and expertly tuned reward schedules.
Of course various titles tried to copy this template, sometimes wholesale, but never reached the same level of success. It’s tempting to say that the reason for this was simply Diablo being more than the sum of its parts, but there’s one more element that might have had something to do with it: intelligent design.
Diablo’s entire world, and most specifically its items, were not just randomly generated. The algorithms for creating them were heavily gated, pruning possibilities at each step. While this might sound limiting, it made for a more coherent world with its own atmosphere and ruleset. Instead of finding a “Longsword +1”, the player would receive the “King’s Sword of Haste,” a mighty weapon that — true to its name — increased swing speed and granted a few other bonuses. The item would stay equipped for hours on end, imbuing it with a sense of importance, and conjure images of a long gone dynasty and its skilled blacksmiths. It’d be natural for the player to form an attachment to the weapon, and its “specialness” would only get accentuated by the lack of any bows or shields with the same prefix and suffix. The presence of fixed items and quests would only add credence to the sword’s legacy, blurring the lines between scripted and generated content, and bring forth a tinge of longing and regret when something better came along.
And when Diablo was finally defeated, the player could simply keep going. The quest for better loot never quite ended, which is perhaps why the game is still played and modded to this day.
Loose threads and general vagueness are often poor crutches in storytelling. These aspects tend to be weird for the sake of being weird, or serve as token springboards for potential sequels, or — worse yet — are indicative of the creators’ lack of a narrative plan, e.g., Lost.
Mystery is inherently alluring, though, and it can also have a fulfilling payoff. The Souls games are a good example of that.
Each title begins with a seemingly disconnected CG intro, and proceeds to thrust the player into a crumbling world with barely an explanation. There are no lengthy expositions, conquests retold over animated world maps, extensive flashback sequences, etc. Instead, whatever pieces of narrative the player puts together are entirely optional and widely scattered about.
A tidbit mentioned in passing by an NPC foreshadows a gruesome battlefield encountered later in the game. Flavour text accompanying an item hints at a long-standing dynasty and its wealth. Parts of defaced statues allude to an outcast regal heir.
There’s not much of a plot to the player-controlled protagonist, but there’s an incredible sense of depth and history to the setting itself. It’s all very cohesive and consistent, and delivered with understated elegance.
That’s something incredibly rare for a brand new series, but the Souls games actually have something of a 20+ year development history.
From Software’s other games such as Eternal Ring, Shadow Tower, Evergrace, Otogi, and King’s Field contain bits of gameplay and ambiance present in the Souls titles: stamina-draining melee attacks, stat-boosting equipment, sporadically dispersed NPCs, non-linear exploration, item durability and crafting, fog-of-war/dynamic lighting, loading screen and item flavour text, highly destructible environments, “soul”-harvesting progression, etc.
All of these previous games experimented with and revised what’s so confidently delivered in Demon’s Souls and Dark Souls, but the series itself also follows in the footsteps of another older title: Wizardry.
Wizardry’s arrival and subsequent popularity in Japan is fairly well documented, and King’s Field, From Software’s inaugural release, is said to have been closely inspired by the Western CRPG. The interesting part is that Wizardry’s success seems to have come in part due to a shoddy localization. The only clear example of this I can find is a Wikipedia entry that mentions Blade Cusinart — a silly nod to Cuisinart food processors — evoking an aura of alien mythology.
I assume the results were similar with subsequent Wizardry titles, which contained even more pop culture references, but it’s hard to find any concrete evidence of how these were interpreted in Japan. Perhaps someone else could shine a light on the subject?
Regardless, it’s still fascinating to think about how a simple misconception could be taken to an extreme. Many of From Software’s titles found a niche audience and followed their own paths instead of borrowing the homogeneous conventions of their peers; what else could we have seen if a misunderstood production memo or marketing bullet-point was left to evolve in a bubble?
In the end the significance of Wizardry’s Japanese localization might be a bit overstated, but its heritage is certainly evident in the Souls games. They’re positively brimming with relics steeped in a strange, foreign history, and these greatly contribute to the series’ unique style.
It’s not difficult to find an article these days detailing the troubles facing the Japanese games industry. Things just aren’t as rosy as they used to be, and there’s plenty of finger-pointing as a result: budgets aren’t big enough, the cultural differences are too vast, software design methodologies aren’t properly utilized, the corporate world tends to stifle innovation, there’s a lack of outsourcing, the desire just isn’t there, etc.
While all these claims might be accurate to some extent, they’re high-level issues that no one can fix single-handedly. Instead of moping about them, I thought it might be a bit more constructive to offer some small, pragmatic advice. In a previous post I tried to do this with a certain localization issue, and now I’ll take a look at an interface quirk common in many Japanese games: too many confirmation prompts.
As an example, continuing from a saved game in a typical modern title is fairly painless. Quite often a “Continue” option is the default selection on the title screen menu, and clicking it automatically loads the latest save file.
On the other hand, these are the steps required to resume my game of Resident Evil 5, one of the marquee current-gen titles developed in Japan:
- Entering the title screen menu immediately brings up a pop-up asking me to “Wait a moment…” followed by a message stating that there’s no storage device selected. This is accompanied by a “Yes/No” prompt asking me if I’d like to select one.
- Clicking “Yes” brings up the OS browser with the available options: hard-drive/memory card/the cloud. This requires me to scroll to my desired option and click it.
- Once the storage device is selected, a “Storage Device Configured” message appears along with an “OK” prompt that needs to be manually dismissed.
- Following the previous prompt, a “Loading content…” message shows up and then a “Load successful.” message replaces it. This is accompanied by yet another “OK” prompt.
- When the title screen menu finally appears, the “PLAY GAME” option is selected by default. Clicking it takes me to the play game menu.
- On the play game menu, the “CONTINUE” option is selected by default. Clicking it takes me to an overview of the last save game.
- The save game overview displays a date stamp, the selected character, and some other miscellaneous info. It is accompanied by an “OK/Back” prompt.
- Clicking “OK” takes me to a network overview screen with various game options such as co-op settings and hit reactions. The default option is “START GAME”, and the screen is accompanied by an “OK/Back” prompt.
- Clicking “OK” takes me to a loading screen that’s quickly replaced by the inventory screen. Here the default option is “Organize” and I need to scroll down and click “Ready” to proceed.
- Clicking “Ready” brings up a confusingly labeled “Exit” confirmation with a “Yes/No” prompt. “Yes” is the default option, and clicking it finally loads my save game.
A large part of Apple’s success comes from elegantly accommodating the most common use case. This basically means that an interface caters to the functionality that’s used most often, while the elegance comes from avoiding extraneous options, prompts, and technically-minded messages (and presenting an aesthetically appealing UI, of course).
Looking at Resident Evil 5 through this lens, the above steps could be truncated and/or altered to provide a more streamlined way of loading the latest save game.
- The “Select a storage device?” screen shouldn’t be there. Instead, the game should automatically select a default storage device, or better yet, select all the available storage devices. If none are available, a warning message could be displayed on the title screen without requiring a separate modal popup.
- The OS device-selection pop-up should only appear if the user chooses to manually change the current storage device.
- The “Storage device configured.” message shouldn’t appear. There’s no point in flooding the user with text if everything went OK. These messages should only pop up if there are errors.
- Same as above; there’s no need to display a “Load successful.” message. The transition into the save state should make it obvious that the data was correctly retrieved.
- If a saved game was found, the default option should be “Continue.” This option should immediately load the latest save game from the selected storage device. Optionally, the game could check all the available storage devices and automatically load the latest save file in order to avoid any extra management on the player’s part.
- The secondary play game menu isn’t necessary if the “Continue.” option loads the latest save game.
- The save game overview should be removed as it provides non-vital information when trying to load the latest save game. Instead, this data should be presented in the load-game interface where the player browses through multiple save files. Optionally, it could also be shown on the loading screen itself.
- The network settings screen should be removed as well since it provides non-vital options that are configured at the beginning of the campaign. There’s no pressing need to change these every time the game is loaded, and this functionality could still be provided via an in-game menu.
- The inventory screen is also superfluous to loading a save game — the save data should already contain the proper equipment information. Presumably the screen is there so that the player can change their loadout following a game-over, but in that case the inventory-customization screen should only appear following the actual death. Alternatively it could also be accessible in-game from the save-checkpoint.
- The final confirmation is horribly labeled, as “Exit” is an ambiguous descriptor. Is the player exiting the inventory screen, or the actual save game loading process (it’s the first one, but it always makes me stop and think)? The prompt itself is also unnecessary, especially after the nine preceding ones.
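The streamlined flow above boils down to “find the newest save on any device and load it, prompting only on errors.” A minimal sketch, where the Save type and the returned strings are purely illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Save:
    timestamp: int  # newest save wins
    slot: str

def pick_latest(saves: list[Save]) -> Optional[Save]:
    """Auto-select the newest save across every storage device."""
    return max(saves, key=lambda s: s.timestamp, default=None)

def continue_flow(saves: list[Save]) -> str:
    save = pick_latest(saves)
    if save is None:
        return "start new game"   # nothing to resume; no error popups
    return f"load {save.slot}"    # happy path: zero confirmations
```

Every prompt in the original ten-step sequence disappears; the only decision the system needs from the player is implicit in clicking “Continue.”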
Confirmation prompts in particular tend to be quite prevalent in Japanese titles. Of course these can be useful when it’s easy to hit the wrong button and the consequences of doing so are quite drastic, e.g., clicking the “close” button instead of the “maximize” button in a word processor after writing a lengthy, unsaved document. However, it’s rarely difficult to select the proper save-file in a game, and loading the incorrect one tends to waste only a short amount of time.
Despite this, Japanese developers seem paralysed with fear of the user accidentally selecting the wrong option. This only applies to UI-related interfaces, though; there are no prompts for avoiding an accidental weapon-reload or putting a car into the wrong gear.
The convention also seems to be that “No” should be the default selection. I have no idea why this is the case, except to prevent the user from accidentally skipping through an important choice while blazing through a bunch of pop-ups.
If that’s the assumption, then it speaks very poorly of the application flow as a whole. Perhaps the user wouldn’t be so quick to skip through these confirmations if there weren’t so many of them? And perhaps removing non-vital popups and prompts would provide a faster and sleeker way to get to the fun part of the game: the actual gameplay.
Have you ever played a game that you really liked, but certain parts of it disappointed you (for the record, I totally dig Tom Francis’ proposed ending to BioShock)?
Did the lack of knowledge pertaining to the developer’s budget/timeline/goals/etc. stop you from thinking “Why didn’t they do it *this* way?”
If you’re passionate about a particular title, then probably not. And why should it? As the end-user, you ultimately care about your own experience, and a game’s faults might seem all the more painful if seemingly obvious and feasible changes could have eliminated them.
For me, that game is Tokyo Jungle, and here’s what I think would have made it better:
Let’s start with the easy, somewhat less subjective field of UI. I don’t think anyone reading this enjoys manually scrolling through the thousands of words that make up a typical EULA (and sometimes studios don’t even want to write their own). The fact that Tokyo Jungle pops up a EULA every time you start the damn game is infuriating. It shouldn’t be there at all, really, especially since its only online component is a global leaderboard.
The leaderboard is not all that great either. It takes a very long time to load, and it’s retrieved whenever you finish playing Survival mode. Why not do it in a separate thread and let the user move on? Or at least only force this path if the player has gotten a new high score? What makes the delay even more frustrating is that it needs to be endured in order to register all the unlockables of the playthrough. Simply quitting a game does not record any of the collected items, story mode pieces, etc., which should be saved instantly.
Finally, the world map is quite useful, but also somewhat confusing. Its most zoomed-in level is quite small and doesn’t clearly indicate accessible areas. The location labels are a bit misleading as well, since they contain a bar that fills up and an icon inside the right edge of the bar. At first I thought the fill indicated my dominance of the area (how many spots I marked with my animal), while the number of icons represented the amount of food within its borders.
It turns out the fill actually reflects the quantity of available food, and the icon is just a label for the fill. To make this indicator more intuitive, the icon should sit outside the bar on its left side, or alternatively a “food” caption should be displayed within the fill.
Aside from the herbivores’ double-jump and inability to consume other animals, there’s not a lot of mechanical variety between the various types of fauna. Sure, there are statistical differences, but the gameplay is exactly the same. Expect to see crocodiles scaling buildings by jumping from one protruding air conditioner to another. Creating custom gameplay for each animal would’ve been a sizeable undertaking, though, so I’ll give Tokyo Jungle a grudging pass here.
What’s less excusable is the stealth mechanic. For something that’s presented as a large part of the game — especially for those peaceful herbivores — there’s no clear way to tell where an animal’s zone of awareness begins. This is exacerbated by the fact that many animals spot you while they’re off-screen, especially in the lower-left and lower-right corners of the view window due to the perspective of the camera.
The minimap helps to spot these potential threats, but not while it rains, and it’s more of a band-aid solution anyway. A circular outline for each animal’s field of vision would’ve helped, or at least some arrows on the edges of the screen indicating potential dangers. A further aid would be displaying the exact threat-level of each animal, and possibly a countdown timer showing how much longer before it reverts to a neutral state.
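Awareness checks like the one implied here are usually just a distance-plus-angle test, which is also why drawing the resulting cone or circle on screen would be cheap. A sketch of the idea (every name and threshold below is hypothetical, not Tokyo Jungle’s actual logic):

```python
import math

def can_see(hunter_pos, hunter_facing, prey_pos,
            view_range=12.0, view_half_angle=math.radians(60)):
    """Return True if prey falls inside the hunter's vision cone.

    hunter_facing must be a unit vector; the range and angle
    thresholds are made up for illustration.
    """
    dx = prey_pos[0] - hunter_pos[0]
    dy = prey_pos[1] - hunter_pos[1]
    dist = math.hypot(dx, dy)
    if dist > view_range:
        return False          # too far away to notice anything
    if dist == 0:
        return True           # standing on top of the hunter
    # angle between the facing direction and the direction to the prey
    dot = (dx * hunter_facing[0] + dy * hunter_facing[1]) / dist
    return math.acos(max(-1.0, min(1.0, dot))) <= view_half_angle
```

Rendering the same range and angle as an outline around each predator would give players exactly the feedback the game withholds.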
Toxicity can also be problematic to detect. Hiding inside buildings or underneath bridges doesn’t seem to help when it’s raining, and contaminated food is hard to spot because its very subtle purple tint blends into the background. Simple icon indicators, similar to the alert exclamations, could have easily removed this ambiguity.
Survival vs. Story
Despite the annoyances mentioned above, Tokyo Jungle’s biggest failing is in how it handles its Survival and Story modes.
Tokyo Jungle was originally a retail game, and it’s painfully obvious that it was modified to fit a price tag. Story mode — the main campaign — consists of 14 short missions, and each one needs to be individually unlocked by grinding it out in Survival mode.
I suppose this approach greatly extends the overall playtime, but it’s quite frustrating to progress through the narrative one small step at a time after jumping through hoops in a completely separate game mode. This is doubly perplexing as unlocking the story missions often requires a certain knowledge of the game’s mechanics, yet those same mechanics are then explained in the unlocked missions. The whole arrangement reeks of a production change implemented late in the game’s development.
The story missions themselves could use a few more checkpoints, but they’re quite fun, full of silly and amusing sequences that slowly unravel the game’s mystery: what happened to all the humans? It’s a neat premise, and it shouldn’t be so heavily gated — especially since gating it was a questionable way to justify a retail price once the game shipped as an inexpensive downloadable title outside of Japan.
Instead, Story mode should be featured first and foremost, and the animals played/fought during its missions should then get unlocked in Survival mode.
Survival mode itself is an even bigger mess.
Its main goal is to live for 100 years and complete various side missions to get as high a score as possible. In order to provide variety and ensure that players get different scores, Survival mode employs randomization and high-threat events/encounters common to roguelikes. The problem is, all these gameplay systems conflict with each other.
Hunger drains far faster than in Story mode (it takes only 90-120 seconds to die of starvation) and the missions are on a strict time limit. This means you are constantly on the run if you hope to get a high score, which also doubles as the currency for unlocking new animals. Completing the side missions awards statistical bonuses and unlocks new costumes as well, providing further incentives to rush through the game.
This approach completely invalidates the stealth mechanic, makes exploration of the cool urban environment impractical, and prevents the player from messing around with fun, emergent events such as battle royales of bears fighting chickens fighting giraffes. The random toxic rains and food shortages add further frustration as they can make some of the side missions virtually impossible to complete.
A better approach would’ve been to tone down the unreasonable hunger meter and remove any other time pressures. Next, the randomization could be more prevalent, starting off each playthrough in a different area with a different mission set. New objectives could come in as old ones are completed, and the resulting pace would let players get comfortable with the game and experiment with its most fun components.
If this led to seemingly infinite playtimes, the randomization could be skewed to provide a gradually increasing challenge. Better yet, the statistically-boosted animals of other players could enter the gameworld as AI-controlled bosses to help crown the real king of the hill. Finally, new animals and costumes not present in Story mode could still be used as prizes for playing through Survival mode.
Agree? Disagree? Have any other examples of a game where certain design choices seemed downright baffling? If so, feel free to leave a comment!