Curvature masking

During development of Blackout, the first Battle Royale mode in Call of Duty (you can see the still pretty hype trailer here: https://www.youtube.com/watch?v=BjiaMBk6rHk), we looked at optimizing the memory footprint of our assets while maintaining an acceptable amount of detail.

The competition at the time, namely PUBG, was very weak visually and suffered from a lot of loading and memory issues.

One of the main benefits of scanned assets, especially organic ones such as rocks, comes from their general shape – a lot of curves, a lot of angles and a lot of variation. Surface variation applied uniformly across an asset tends to look fake and CG. We don’t see uniform detail in real life, especially when it comes to rocks that feature a lot of erosion and interaction with nature – which leaves some areas smoother than others.

In our passes to reduce memory, we examined what the true benefit of keeping those 8K texture sets was over smaller textures with procedural blending. We noted that, outside of the larger forms, the detail – specifically in the normal map – was mostly generic noise. A lot of the Treyarch Call of Duty games were already built through layered materials and textures, with procedural noise and material settings to help fine-tune them and give them more unique, hand-authored looks. It made sense to take that approach for the very bespoke scanned assets coming in.

As our artists started building out large maps, our partners at Raven had been working on scanning assets, such as rocks, for use in our BR map – all high quality, but even when compressed they took over 12 MB of memory (which for a 2x1x1 meter rock is a bit much). Working with our main environment artist, Andy Livingston, we tackled the issue through a number of unique concepts – reducing memory while maintaining or improving the quality of the assets.

We can take a look at the Megascans library for good examples of photogrammetry assets that show all the baked-in high-frequency detail. Below is the lit and worldNormal view of a typical scanned asset with a 4K texture set (in Unreal):

It looks good, but if we consider the number of rock assets a large map might contain, a 4K normal map adds up very fast. We can tap into the curvature of the normal map via length(pixelNormal.xy). You can read a little more about it here: https://thebookofshaders.com/glossary/?search=length
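
To make the idea concrete outside of any particular engine, here is a minimal numpy/PIL sketch of deriving that mask from a tangent-space normal map – the file name, the [0,1] to [-1,1] remap and the bias at the end are all assumptions for illustration, not the shader we shipped:

import numpy as np
from PIL import Image

# Load a tangent-space normal map and remap from [0, 255] to [-1, 1]
pixels = np.asarray(Image.open("rock_base_normal.tga")).astype(np.float32)
normal = (pixels[..., :3] / 255.0) * 2.0 - 1.0

# length(pixelNormal.xy): near 0 where the surface is flat in tangent space,
# approaching 1 on steep slopes and curved areas
curvature_mask = np.sqrt(normal[..., 0] ** 2 + normal[..., 1] ** 2)

# Optional bias/clamp before using the mask to blend in a detail normal
curvature_mask = np.clip(curvature_mask * 2.0, 0.0, 1.0)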

We find that we get a lot of variance that we can use to mask in detail normals on top of the original asset. Since we only care about the large forms and the general noise, reducing the normal resolution works in our favor (we get a blurry mask and reduce the base normal in one go).

We see below that, without tweaking, we get a lot of large detail from the base normal that works perfectly fine as a blend mask.

Having a detail normal from scans comes in handy here – but we also found that generic noise worked really, really well. After all, this is micro-detail on a rough surface. For this example, we look at the asset with a 512 base normal and a 512 detail normal tiled on top.

With a bit of tweaking, it’s hard for the player to know which was the original asset, as the curvature provides a good way to create procedural organic masks. For an additional mask, length(worldNormal.xyz) works especially well to capture form changes in the geometry.
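
A rough sketch of the blend itself, again in numpy rather than shader code (the simple additive combine and renormalization are illustrative – any detail-normal blend such as whiteout or RNM would work just as well):

import numpy as np

def blend_detail(base, detail, mask):
    # base, detail: HxWx3 tangent-space normals in [-1, 1]; mask: HxW in [0, 1]
    blended = base.copy()
    blended[..., 0] += detail[..., 0] * mask
    blended[..., 1] += detail[..., 1] * mask
    # Renormalize so the result is still a unit normal
    length = np.linalg.norm(blended, axis=-1, keepdims=True)
    return blended / np.maximum(length, 1e-6)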

Below we can see the blended result (left) and the original asset (right).

There are some noticeable changes between the two – but in the grand scheme of things they are arguably fairly minor.

If the player looks at these assets side by side, they’d be hard pressed to tell which one is the original. But for the game, one uses a 4K normal map – the other uses two 512 textures, for a considerable memory reduction. Under magnification, differences can be spotted due to the lack of detail in the base normal, but under normal circumstances detail and quality have been preserved.
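
As a back-of-the-envelope check (assuming a BC5-style compressed normal at 1 byte per pixel and ignoring mips), the saving looks roughly like this:

# 1 byte per pixel for a BC5-compressed normal map, mips ignored for simplicity
full_res = 4096 * 4096 * 1           # the original 4K normal
reduced = 2 * (512 * 512 * 1)        # a 512 base normal plus a 512 detail normal
print(full_res / (1024.0 * 1024.0))  # 16.0 MB
print(reduced / (1024.0 * 1024.0))   # 0.5 MB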

Generally speaking, any rough material with a lot of high-frequency noise – such as concrete, asphalt or tree bark – can go through such a reduction without impacting the final look of the game much.

While a basic application of detail normals works and is used in most games, it fails to produce the organic weathering that assets typically exhibit. Adding a tiny bit of logic to the blending helps produce more natural assets without wasting artist time or memory on authoring masks.

BRDFs as style choices – why they matter for art direction

‘I can barely tell the difference’ was the response I got when demoing the change from a standard Lambert diffuse to Oren-Nayar. While it’s true the difference can be seen as minor, it makes a bigger overall impact and also adheres to roughness/gloss.

Oren-Nayar going from 0 to 0.3 roughness. At 0 roughness, we essentially achieve the same look as a lambert.

So for the lead artist or art director who is not familiar with a BRDF (bidirectional reflectance distribution function) – what is it, and why should you care?

In layman’s terms, a BRDF is a function that describes how light is distributed when it reflects off a surface. Some are developed to replicate certain surfaces – Minnaert, for example, is commonly used to mimic cloth-like surfaces such as velvet (although it was originally intended for the moon’s surface).
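
To make that a little more concrete, here is a small Python sketch of the qualitative Oren-Nayar approximation mentioned above – this is the textbook formulation rather than any engine's implementation, and the parameter names are mine. Note how it collapses to a plain Lambert term when roughness is zero, which is exactly the behaviour in the comparison above:

import math

def oren_nayar_diffuse(n_dot_l, n_dot_v, cos_phi_diff, roughness, albedo=1.0):
    # Qualitative Oren-Nayar model; roughness is the sigma term from the paper
    sigma2 = roughness * roughness
    A = 1.0 - 0.5 * sigma2 / (sigma2 + 0.33)
    B = 0.45 * sigma2 / (sigma2 + 0.09)
    theta_i = math.acos(max(min(n_dot_l, 1.0), 0.0))
    theta_r = math.acos(max(min(n_dot_v, 1.0), 0.0))
    alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
    return (albedo / math.pi) * max(n_dot_l, 0.0) * (
        A + B * max(0.0, cos_phi_diff) * math.sin(alpha) * math.tan(beta))

# At zero roughness this is exactly Lambert: albedo / pi * NdotL
print(oren_nayar_diffuse(0.7, 0.9, 0.5, 0.0), 0.7 / math.pi)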

For starters, the shading model of a game can have drastic effects both on how the end look is achieved and on what the assets that need to be created look like. Below we can see a screenshot of The Legend of Korra by Platinum Games. It blends cel-banded characters with hand-painted backgrounds. This cel-banded look tends to be the most noticeable, and most common, deviation from traditional diffuse shading models:

A quick look at part of the texture on the character above shows how simplistic the color information is. Here, the reliance is on both the lighting and the mesh geometry to create the extra detail. There is, of course, no particular reason why you could not have the lighting and banding be affected by both the values of the texture map and the contour of the geometry.

You can mix and match diffuse and specular models (and all games do) with decent results and real differences. In Guardians of the Galaxy: A Telltale Series (above), we can see multiple BRDFs in action, working harmoniously to achieve a believable yet stylized end result – from the hair (Kajiya-Kay) and head (Oren-Nayar/GGX) to the sweater (RAD cloth).

Below, we can see a much more advanced diffuse providing cartoon-like shading. Even though it is ‘cel-shaded’ like Korra, the addition of specular and normals breaks up the typical cel-banding to give it a more unique, organic look.

 

Or we can rely on just the art assets to provide the style, and stick to a more common set of BRDFs. Life is Strange: Before the Storm and The Walking Dead: Season 3 do this to good effect, and the final images and styles are distinct, as the art direction ends up separating the two.


As it stands, and from the examples shown above, I hope it is somewhat clearer to artists reading this that the shading model your project uses is important in achieving the end result of a look, even a stylized one. There are many established BRDFs out there, and starting a dialogue with your engineering and tech art teams in pre-production can help get better results, or different results. I mentioned it briefly in a previous blog post, but one of the tools I found good success with was Disney’s BRDF viewer – a simple application that people can use to evaluate how light reacts to a surface, with handy sliders to control attributes. While not perfect in terms of usability, it does provide a decent option to compare and evaluate what might work for a project.

Something I’ve struggled with in the past was showing art directors and artists what the perceived changes would look like on a test asset, with side-by-side comparisons. Most artists are not aware of the differences or simply do not care – but they should be, as the smallest change can shift the look of an overall character or scene quite a bit. Perhaps look development, a practice often associated with film and animation, should be considered more heavily by games.

One thing to keep in mind – there is a solid ground truth that most real-time shading models attempt to approximate. However, due to runtime performance, approximations are made and performance-versus-accuracy trade-offs are decided upon. To assume that one viewer, say Marmoset or Sketchfab, is the ‘right’ one is naive, and the final result should always, always be looked at in your studio’s or project’s engine of choice. Content made for the game does not live in a viewer for an artist’s portfolio but rather in the game itself.

Special thanks to Matt Davidson, who helped introduce and show me the importance of BRDFs. The look and feel of many of Telltale's newer projects would not have been possible without his work and the addition of so many high-quality shading models.

Telltale’s move to PBS: shifting technologies and practices.

While I was at Telltale, we had an internal agenda to upgrade the ‘fidelity’ of our projects moving forward (starting with Batman) so that our games would not be perceived as ‘outdated’. On paper this sounds like a simple initiative, but it was no small feat, as Telltale had very compact development schedules, which make such transitions sensitive and difficult to achieve cleanly. Coupled with the deadlines, teams needed to work with an “author once, run anywhere” approach to multi-platform development. This essentially meant that content creation was to be done once and the engine would handle deployment to all platforms, which at that point in time stretched from mobile to current-generation consoles (Xbox One / PlayStation 4).

The majority of art on Telltale projects was heavily reliant on hand-painted maps with lighting painted in. Most environment assets had no normal maps, and most characters had painted lighting, since the light rigs used for characters usually did not have specular (only a few lights per scene would support per-pixel specular, depending on the platform). Most environmental lighting was baked via Maya and mental ray. If you’re reading this and are familiar with the process, it is likely you were involved with game development in the late PS2 / early 360 era, when lightmaps were starting to be baked to textures (a massive improvement over slicing geometry and baking to vertices).

Since Telltale’s projects were so stylized, and because the overhead of this old-school lighting technique was so low, the end result was effective and played well to the aesthetic of each project. The entire pipeline Telltale had set up was built around this concept, from internal Maya tools to a proprietary engine that showed artists what they could expect on each platform (and there were quite a few platforms the games shipped to).

Workflows were also adapted around this – artists in particular were encouraged to re-use texture maps and apply materials without rework to help them hit episode deadlines. This worked very well, and Telltale’s projects, each with an absurd amount of new content, were released at speeds unmatched by other studios.

However, the style did start to show its age, especially with the turn of the current console generation, and other projects showcasing better shading and lighting did not help hide the ‘age’ of Telltale’s games. It was decided that Batman would be the start of a new push towards improved visual fidelity. Right before that initial shift, the engine had been updated in the background to support DX11 for The Walking Dead: Michonne, but it was still using the same shading model that had been in place for quite a while – a Lambertian diffuse model with Blinn-Phong specular (where applicable, for characters).

The Walking Dead: Michonne. Hand-painted diffuse maps on characters typically included heavily painted lighting information, such as sharp, hot specular highlights on noses and lips.

If you looked closely at Michonne’s water and some of the foliage in the first scene, you would have seen the first use of the material system – a flow map was created and used for the water, which was a prominent theme in the game, and there was some subtle animation on the foliage to simulate wind.

While only a few days in, the system proved to be fairly powerful, and its ability to generate compiled shaders for use in Maya’s DirectX 11 plugin was immensely useful: artists could preview their assets without exporting to the game engine, allowing them to quickly set up more complex materials – such as texture and material blending.

The start of a very fast and rocky trip.

If you’ve attended GDC, you’d have noticed a few years where a lot of studios began talking about their transitions to a physically based model. And it is indeed a transition – artists cannot be expected to turn on a dime and change their way of thinking.

Unfortunately, the timeframe for releasing Batman was not the typical 2-3 years other studios have; it was instead a shortened several months (with an E3 demo preceding release). Over the course of a few months the engine’s renderer was overhauled with new graphical features:

  • volumetric fog support
  • an entirely new diffuse model that supported advanced cel banding and color blending
  • more advanced mesh outlines
  • adoption of a physically based specular model (GGX)
  • entirely overhauled lighting
  • improved post-processing

With such a rapid turnaround and implementation, there was not much time for the art department to change its processes. This resulted in less-than-ideal optimization of scenes and characters, which led to performance issues at initial release. While rectified fairly quickly (within several weeks), the impact was noticed and highlighted one of the issues of such a compact release timeframe.

The DCC Funnel

If you’ve worked with a traditional game pipeline, there is typically a very fixed DCC path. Art is created in Maya or 3ds Max, a lot of scripts and a proprietary format are involved, and the entire process is bottlenecked through what is essentially an export button that spits out art in a format, with metadata, that the engine can read and process. This is similarly true for commercial engines such as Unreal or Unity, although they adopted more widely available formats such as FBX (Unreal used to look for .ase, .asm and .asx before FBX took over), which allows a wider range of DCC programs to be used.

Telltale relied heavily on Maya for asset creation, scene setup and import. This meant a lot of data was handled inside of Maya, which made it a useful place to aggregate data. Tools were easy to set up to help with materials and exporting, but this also ended up being a double-edged sword: the heavier the toolset became, the slower Maya would be. Limitations in Maya also came as a surprise in the least expected places. The studio’s reliance on mental ray, for instance, was an issue early in production, as mental ray does not recognize DirectX 11 shaders and would treat them as default grey Lamberts. The resulting lightmaps would be devoid of color bounce, which had been used to great effect by the environment art team in previous projects. While a proprietary light-baking system, and later on Enlighten, was introduced, it was a headache for the art team, who had to go without while those systems were being developed.

Adopting GGX / PBS took more time than anticipated

A major miscalculation was the assumption that people would take to a new specular model easily. In fact, most of the existing art assets had not been created with any specular in mind – lighting was baked onto a large chunk of assets. Characters had dynamic lights (at the time, up to 4 per character), but specularity was rarely used and art often opted to paint highlights in heavily. Some artists took a longer time to grasp that a matte surface still receives specular light, just more spread out – giving the appearance of their diffuse being ‘washed out’.

One important tool for helping art direction understand the longer ‘tail’ that GGX has was Disney’s BRDF viewer, which provided sliders and the ability to compare BRDFs. A custom GIF was created as well that showed gloss transitioning from 0.0001 to 0.99, which went a long way in showing the curve it follows.
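
For the curious, the ‘tail’ is easy to see by evaluating the standard GGX (Trowbridge-Reitz) normal distribution term directly – this is just the published formula, not Telltale's shader, and the roughness value here is arbitrary:

import math

def ggx_ndf(n_dot_h, roughness):
    # Trowbridge-Reitz / GGX normal distribution; alpha is roughness squared
    alpha = roughness * roughness
    denom = n_dot_h * n_dot_h * (alpha * alpha - 1.0) + 1.0
    return (alpha * alpha) / (math.pi * denom * denom)

# Sample the falloff away from the highlight centre for a mid-gloss surface:
# the value drops slowly, which is the long tail artists noticed
for degrees in (0, 5, 10, 20, 40):
    n_dot_h = math.cos(math.radians(degrees))
    print(degrees, round(ggx_ndf(n_dot_h, 0.3), 4))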

Physically based shading was also a hot topic for a while, and led to some interesting results. While we ended up sorting most issues through a heavy reliance on Substance Painter / Designer, some initial work ended up with poor results that were blamed on the shading being incorrect, or on ‘bugs’. The initial teaser for The Walking Dead: Season 3 showcased a zombie with very muted specular: the artist had painted a bright blue specular map instead of the more physically correct specular value for dielectrics (closer to 0.02). The end result was a quick hack that muted and dulled all specular output for this character, which ended up looking fairly flat.

An incorrect specular albedo was painted for this zombie, and while the final output was muted significantly, we can still see a blue hue on the skin.

 

Automation and tools saved the day.

I worked on three major scripts / tools during my time at Telltale, all built to support various stages of production and to address critical issues facing the studio and its projects.

While Batman had a set of assets built to support the newer BRDFs from the get-go, The Walking Dead: Season 3 had a much longer tail and had started production earlier – with the assumption that it would initially use the older materials and shading models the studio had used before. This meant that, a few days before the vertical slice was due, most assets had only a diffuse map painted when the decision to use GGX for the specular BRDF was made.

How would the art team (at this point fairly small) manage to re-do their assets in a few days?

The short answer was they didn’t.

Thanks to the way the diffuse painting and guidelines were set up, it was fairly trivial to separate out materials and generate color ID maps from the diffuse PSDs.

Above: A color ID map and its corresponding albedo map. Linework was generally stored on a separate texture so the shader could nullify specular correctly.

From there, if the naming was set up correctly, artists could simply launch a Python tool that utilized Substance’s Batch Tools to generate physically correct specular and gloss maps, auto-correct the albedo maps (especially if the material was metallic), and then auto-assign the correct shaders with the maps plugged in and ready for export. Below we can see the very first assets I used to generate these maps while testing the Substance Designer graph and tool I wrote:

Assets above were created by Dan Wallace and Aasim Zubair. One thing to note is that, due to the stylized nature of Telltale’s projects, the comic linework had little or no specular response.

While the script took a weekend to write and test, it ended up creating a vertical slice so successful that it was used as the showcase trailer for The Walking Dead: Season 3, and the tool saw use for the rest of the project and as a way to have outsourcing generate correct gloss/specular maps with little oversight. To view the trailer, you can visit https://www.youtube.com/watch?v=m3M5mlkvk9w

The script was also adjusted to support multiple projects, such as the in-progress Batman and the upcoming Guardians of the Galaxy – although by that time the art team had adjusted well to the physically based shading model and relied more heavily on Substance Designer / Painter for their assets.

def substanceRender(self, diffuse, currentProject):

    location = locations()
    sbsRender = location.substanceDirectory()[2]

    # Pick the Substance graph that matches the current project
    if currentProject == str(location.wd3Project()):
        sbsInput = location.substanceDirectory()[4] + "graphSpecGloss.sbsar"
    else:
        sbsInput = location.substanceDirectory()[4] + "batman-graphSpecGloss.sbsar"

    inputDiffuse = currentProject + diffuse
    inputMask = inputDiffuse[:-4] + "_mask.tga"

    sbsRenderArg = ('render ' +
                    '--inputs "{0}" '.format(sbsInput) +
                    '--set-entry input_diffuse@{0} '.format(inputDiffuse) +
                    '--set-entry input_mask@{0} '.format(inputMask) +

                    '--input-graph-output {0} '.format('diffuse') +  # specify which outputs to process
                    '--input-graph-output {0} '.format('spec') +

                    '--output-format "{0}" '.format('tga') +
                    '--output-path {0} '.format(currentProject) +

                    '--output-name {0}'.format(inputDiffuse[:-4]) + '_{outputNodeName}'  # file output
                    )

    return sbsRender, sbsRenderArg  # for debugging purposes
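
For context, a hedged example of how such a command might be launched – the instance name, project root and diffuse path below are placeholders for illustration, not the actual tool code:

import subprocess

# 'tool' is an instance of the class that owns substanceRender; the diffuse
# path and projectRoot are made up for this example
sbsRenderPath, sbsRenderArg = tool.substanceRender("characters/zombie_diffuse.tga", projectRoot)
subprocess.call('"{0}" {1}'.format(sbsRenderPath, sbsRenderArg), shell=True)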

Prior to Batman, there was no concept of memory auditing, or any process for it. What little there was took place at the end of production during build time – often resulting in last-minute texture size reductions done by a member of the build team. While the rendering technology had leapt forward an immense amount in a short period of time, the production methodologies had not. This was compounded by the lack of tools or processes to help adapt art production to the new tech. Imagine not knowing how ‘large’ an environment, or even a character, was in terms of memory. In the past, the art department was told to make things look good in a short amount of time – which sometimes meant cobbling together textures from various projects and other sources, and slapping a new material onto a single mesh.

Some assets in older projects may have had upwards of five materials on them. This didn’t matter much when there was no dynamic lighting on these assets – everything was baked on with lightmaps. With the shift to a more physical model and renderer, doing so added much more complexity and required tighter control of art assets. It was simply not good practice to have multiple materials on meshes, and the addition of specular albedo / gloss and normal textures would triple the memory costs. Needless to say, running out of memory was often an issue on Batman and Walking Dead due to the lack of visibility into how large scenes were.

The tipping point came near the end of Batman’s launch, when we needed a good view of what our material settings were. Thanks to the nature of Maya and its DirectX 11 plugin, it was possible to query every material setting and texture input in a Maya scene. Since Telltale relied heavily on Google Apps for mail and documents, it also made sense to use Google Sheets as a way to store this information. Thankfully, Google provides a Python API to access their online app suite, and after a few weeks of late nights, I ended up writing a set of scripts that would do the following on a nightly schedule (a rough sketch of the Maya side of the audit is shown further below):

  •  Connect to Google Sheets, and read through various sheets containing a list of characters, environments and objects being used in any one production
  • Launch Maya in offline mode, connect to source control and download those assets
  • Open the file and pull a large amount of information about assets:
    • Unique texture count per scene, Texture dimensions, sizes, channels, triangle counts per asset, total triangle counts per scene, materials per object, total material instances and unique shaders being used
  • Generate a new set of spreadsheets with that information collated in a more readable fashion
  • Email out a log once done

Above: a sample of the JSON file generated by the script. Initially used to collate material / shader information, the script grew to encompass all relevant data possible.
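
The Google Sheets and source-control pieces were specific to Telltale's setup, but the Maya side of such an audit can be sketched in standalone mode with nothing but maya.cmds – the scene path and the handful of stats collected here are illustrative, not the production script:

import json
import maya.standalone
import maya.cmds as cmds

def audit_scene(scene_path):
    # Open a scene headlessly and collect basic memory-relevant stats
    cmds.file(scene_path, open=True, force=True)
    stats = {
        "scene": scene_path,
        "triangles": cmds.polyEvaluate(cmds.ls(type="mesh"), triangle=True),
        "materials": len(cmds.ls(mat=True)),
        "textures": {},
    }
    for fileNode in cmds.ls(type="file"):
        texPath = cmds.getAttr(fileNode + ".fileTextureName")
        stats["textures"][texPath] = {
            "width": cmds.getAttr(fileNode + ".outSizeX"),
            "height": cmds.getAttr(fileNode + ".outSizeY"),
        }
    return stats

if __name__ == "__main__":
    maya.standalone.initialize(name="python")
    print(json.dumps(audit_scene("D:/scenes/sample_environment.ma"), indent=4))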

With this information we were able to move forward with Walking Dead and identify where we were using up our memory. With a budget of roughly 1.2 GB per scene, we found that in many cases we were blowing out memory due to the size of our environments. Unfortunately, by the time this information had been collected, we were close to shipping Episode 1 of Walking Dead, and few resources were available to redo assets.

This is where Simplygon came into play. In previous projects, Simplygon was relied on to reduce vertex counts for characters on lower-end platforms (Wii, mobile), but not much else. Simplygon provides licensees with a plugin for Maya, and I ended up writing a tool that allowed artists to ‘one-click’ reduce assets in a pragmatic fashion – select a number of assets, choose a new name for the final reduced mesh (and material), and a final resolution. With the help of Simplygon’s plugin and some Python/MEL, we were able to produce a smaller asset with a single reduced material and a single set of normal, spec albedo/gloss and albedo textures.

The result was clear: a nearly 50% reduction in total memory cost for some scenes without much loss in quality. Scenes that were crashing the Xbox One were now performant.

This diagnostics tool also enabled the start of memory budgeting – something the studio had not done in the past. The budget changed per title, but generally allowed a good chunk of memory for environments, then characters, followed by UI, with VFX last. In some projects characters gained a larger share of memory than environments, but the end sum allowed scenes to be set up while staying under the set budget. During scene-planning meetings, production could pull up the sheets to determine whether a planned scene allowed for a feasible amount of character variety.

Beginner Python help – keeping things simple – use built-in features!

A user in the Programmer’s Discussion / Python channel had issues with code they were working on. In this case, the user needed to check a string to see if it passed certain password requirements:

It was a mess of code, with lots of variables and a counter that would get added to through a number of for / if loops. I’ve removed most of it, but here’s a small sample of what it looked like:

count_one = 0
count_two = 0
count_three = 0
count_four = 0
password = "ssssL"
length_password = len(password)
lower_alphabet = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z"]
upper_alphabet = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"]
numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
special_characters = ["$", "#", "@"]
 
if length_password >= 6 and length_password <= 16:
   
    for letter in password:
        for lower_letter in lower_alphabet:
           
           if letter == lower_letter:
               count_one = count_one + 1
 
if count_one == 0:
    print("Missing lowercase letter in password")

A huge mess.

While counters work, it’s much simpler to use booleans. Simply iterating through the characters in a password and setting a boolean to True is enough to verify whether the password meets a set of given requirements. The giant lists like lower_alphabet and upper_alphabet can also go – there are built-in constants in the string module (string.ascii_lowercase, string.ascii_uppercase) – or, as in the following example, some quick built-in string checks:

password = "ss#ss2s"
upperCaseCheck = lowerCaseCheck = numbersCheck = specialCharCheck = False
special_characters = ["$", "#", "@"]


if len(password) >= 6 and len(password) <= 16:

    for char in password:

        if char in special_characters: specialCharCheck = True
        if char.isdigit(): numbersCheck = True
        if char.isupper(): upperCaseCheck = True
        if char.islower(): lowerCaseCheck = True

    if not specialCharCheck: print("Fail: Did not find special character in password")
    if not lowerCaseCheck: print("Fail: Did not find lower case letter")
    if not upperCaseCheck: print("Fail: Did not find uppercase letter")
    if not numbersCheck: print("Fail: Did not find number")


else:
    print("Fail: Not within defined length")

Baking information into vertex color

This past year at GDC 2017, there were a few talks on the use of vertex shaders to provide movement. While not a stunning new revelation or paradigm shift, it does mark the beginning of an interesting shift in thinking – that vertex information can be used to store data that’s more than just information for blending textures or tinting the mesh.

But painting that information by hand can be fairly boring or error-prone, since it’s hard to visualize. Here’s a sample Python script that paints vertex positions in world space (or local space) from Maya:

import pymel.core as pm

# Operates on the first selected mesh
obj = pm.ls(sl=True)[0]

for x in obj.vtx:
    # Query the vertex position in world space (drop ws=True for object space)
    worldPos = pm.xform(x, q=True, t=True, ws=True)
    pm.select(x)
    pm.polyColorPerVertex(rgb=(worldPos[0], worldPos[1], worldPos[2]))

So, fairly simple – iterate through each vertex in the mesh shape, grab its position in world space (or omit the ws flag to do it in object space), and write it to the vertex color.

We can verify that the vertex information is painted correctly by looking at the Component Editor with a few vertices selected, or by simply turning on vertex colors in the viewport. Neat!

You may notice that the color values go into the negatives (as expected), and if you work with a large mesh that has vertices beyond translation values of (1,1,1), the mesh starts getting painted weirdly – but the data is still correct. However, your game engine may treat vertex color information as RGB8, and you’ll need to remap the values to fit. The following adjustment remaps the positions to (0,1) based on the bounding box of the mesh, with a fairly basic change to the script.

import pymel.core as pm


def remapValue(originalValue, oldMin, oldMax, newMin, newMax):
    # Linearly remap a value from one range to another
    oldRange = oldMax - oldMin
    newRange = newMax - newMin
    return (((originalValue - oldMin) * newRange) / oldRange) + newMin


# Operates on the first selected mesh; the bounding box is queried from the selection
obj = pm.ls(sl=True)[0]
boundingBox = pm.polyEvaluate(b=True, ae=True)

for x in obj.vtx:
    originalPos = pm.xform(x, q=True, t=True)
    newPos = [
        remapValue(originalPos[0], boundingBox[0][0], boundingBox[0][1], 0, 1),
        remapValue(originalPos[1], boundingBox[1][0], boundingBox[1][1], 0, 1),
        remapValue(originalPos[2], boundingBox[2][0], boundingBox[2][1], 0, 1)]
    pm.select(x)
    pm.polyColorPerVertex(rgb=(newPos[0], newPos[1], newPos[2]))

This results in a paler-looking mesh, but now with better values to work with. One use for this could be painting gradients on tree trunks to anchor them with a black value at the base while a vertex shader plays an animation.
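
For example, a minimal variation of the loop above (reusing the remapValue helper and the same bounding-box query) that paints only a height gradient for that kind of trunk anchor might look like this:

import pymel.core as pm

obj = pm.ls(sl=True)[0]
boundingBox = pm.polyEvaluate(b=True, ae=True)

for x in obj.vtx:
    pos = pm.xform(x, q=True, t=True)
    # Black at the base of the mesh, white at the top (Y axis only)
    height = remapValue(pos[1], boundingBox[1][0], boundingBox[1][1], 0, 1)
    pm.select(x)
    pm.polyColorPerVertex(rgb=(height, height, height))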

Converting a list to string

A lot of the time I need to take a Python list and convert it into one long string to pass into MEL or to feed into a batch tool via the command line – the best example I can think of is Substance’s Batch Tools, which typically look for custom inputs.

It’s actually a one-liner using str.join:

stringTemp = " ".join(randomList)

That’s it. It’s fairly useful when you need to pass a list of mesh groups to Substance via --input-selection
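
For example (the group names here are made up), joining a list of mesh groups before dropping it into a command-line argument might look like:

meshGroups = ["head_grp", "torso_grp", "arms_grp"]
selectionArg = '--input-selection "{0}"'.format(" ".join(meshGroups))
print(selectionArg)  # prints: --input-selection "head_grp torso_grp arms_grp"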

A good way to capture GIFs

When I look back at all of the documentation I wrote for artists, and a lot of the conversations I’ve had with art leads and directors, one of the most effective ways to communicate changes or instructions was to provide animated GIFs. I’m not a huge fan of video tutorials – they are typically too long for the amount of content they cover, and are often very barebones and hard to use as reference. On the flip side, a long wall of text with dozens of images can also be hard to digest.

A happy middle ground I’ve found has been short text blurbs with animated GIFs. GIFs loop, they can be fairly long, and they add a lot to the wow factor. They are also much easier to embed in online wikis like Confluence – in fact, they are just as easy as images: simply drag and drop or copy/paste into the text.

And thanks to GIFCam, they are just as easy to make (and it’s free!): http://blog.bahraniapps.com/gifcam/

Just like Fraps, GifCam is great at capturing OpenGL and DirectX applications, and it was the best way to document and demonstrate material and tool workflows.

 

Python and images

During mid-development of Batman and The Walking Dead: Season 3, an artist requested the ability to strip the alpha channel out of textures without having to open them in Photoshop. As it turned out, she had noticed that Substance Painter was exporting images with an alpha channel regardless of whether or not it was being used. While this has been fixed in more recent releases of Painter, I ended up reusing part of the script to get a better understanding of how assets were being generated by the art department. Keeping track of it helped get ‘easy wins’ when it came to optimization, as an unused alpha channel in a texture just took up more memory.

There are a number of image manipulation modules available for Python, but I found PIL ( https://pypi.python.org/pypi/PIL ) to be fairly straightforward to use when analyzing Targas – the texture format of choice at Telltale. Below is a snippet showing how easy it is to find textures with an alpha channel and strip it out. At Telltale I ended up passing the file list to a QListWidget and connecting a QPushButton to a function to re-save.

from PIL import Image

# fileList: a list of .tga paths gathered elsewhere in the tool
for tga in fileList[:]:

    print("Processing {0}".format(tga))
    TGA = Image.open(tga)
    if TGA.mode == 'RGBA':
        print("Found Alpha channel")
        try:
            TGA = TGA.convert('RGB')
            TGA.save(tga)
            print("Successfully saved without alpha")
        except IOError:
            print("Couldn't save - please make sure the file is checked out")
Looking at this old code, I noticed I unnecessarily used fileList[:] instead of just fileList. I’m not sure why I used to do this, but it’s definitely not something I do anymore.

A full list of image modes can be found in the PIL docs here: http://effbot.org/imagingbook/concepts.htm#mode – but as a tech artist in games you’ll probably run into L, RGB and RGBA the most.
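
If you want to keep track of which textures carry which modes (as part of the ‘easy wins’ auditing mentioned above), a small sketch like the one below is enough – the folder path is a placeholder:

import os
from PIL import Image

textureRoot = "D:/textures"  # placeholder folder
for root, dirs, files in os.walk(textureRoot):
    for name in files:
        if name.lower().endswith(".tga"):
            img = Image.open(os.path.join(root, name))
            print("{0} -> {1}".format(name, img.mode))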

First posts are always the hardest

Many moons ago, I did some contract work for the now non-existent Maxis team over in Emeryville, California (not to be confused with the Maxis team in Redwood Shores that works on The Sims).

We had an issue with a lot of copy-and-pasted assets in Maya – as it happens, Maya absolutely loves to prefix ‘pasted__’ onto the names of transforms, mesh shapes and almost every other node that ends up being pasted into the scene. While Maya has the ability to search and replace names on transforms and mesh shapes, it did not (and still does not) have the ability to clean up the Hypershade nodes – which can lead to a fairly unreadable Hypershade, as at some point you’ll have scenes with material names like ‘pasted__pasted__pasted__blinn3’.

One of the artists made a script that removed ‘pasted__’ from the material names, but unfortunately it only removed the first instance. While I never ended up looking at the script at the time, I imagine he simply wrote something to check the first x characters and remove them if they matched. This led to artists having to run it a few times to clean up the Hypershade.

When I eventually worked at Telltale, I noticed the art team had similar issues, except it was much worse, as the production times for each episode were very short and a lot of copy and pasting was used – either through the standard Ctrl+C/Ctrl+V or through a custom import script.

I ended up writing the script below to catch all instances of a defined set of characters via regex – as I did not want artists to have to click buttons multiple times.

# Author : Farhan Noor 7/13/2015
# Hypershade Cleanup
import pymel.core as pm
import re


def renameHypershadeNodes(hypRemove, hypReplace):

    # Gather material / shadingEngine nodes from the scene
    hypSelection = pm.ls(type='shadingEngine', mat=True)
    print(hypSelection)
    for shadeNode in hypSelection:
        if re.search(hypRemove, str(shadeNode.name())):
            print("Renaming %s to %s" % (
                shadeNode.name(), shadeNode.name().replace(hypRemove, hypReplace)))
            shadeNode.rename(shadeNode.name().replace(hypRemove, hypReplace))

renameHypershadeNodes("pasted__", "")

Note that we did not care about the file texture nodes – only the materials themselves, as those names were baked into the mesh export, so this doesn’t really look at any other nodes. You can easily adjust it to clean up file texture nodes as well by looking for type='file' (see the sketch below).
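
For completeness, a hedged sketch of that adjustment – the same pattern with a different ls query – that renames file texture nodes as well:

import pymel.core as pm

# Strip the 'pasted__' prefix from file texture nodes too
for fileNode in pm.ls(type='file'):
    if 'pasted__' in fileNode.name():
        fileNode.rename(fileNode.name().replace('pasted__', ''))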