SECURITY RISKS AND HOW THEY AFFECT US

Security issues that affect us on Slime Herder and Incapacitor

For Slime Herder, we track information using Unity Analytics. We record how many people have attempted a purchase of our game, and when people close our game, so we know what the last level they played was. This is very innocent data, and we are not storing the information ourselves; that is handled by Unity itself. However, it may have been useful to track where our players are located and when they are playing our game, to help identify better marketing practices. If we were to store information like that ourselves, we would be opening ourselves up to serious legal issues. If we kept that information on a server that was hacked and it was released, we would be liable for the breach, and could face heavy fines or even criminal penalties.
As we are just students, this is not a risk a project like this could afford, so we made the decision to keep very minimal information as a safety measure.

 

Recently there was a large data breach of the Ashley Madison servers. Approximately 36 million user accounts were compromised in the attack, and investigations showed this was possible due to several flaws in the company's security management systems and procedures: there was little documentation of their policies and procedures, a lack of resourcing and management of the security process, no assessments of privacy threats, and no evaluation of the security process to check it was still fit for purpose.

“According to the findings, ALM’s security framework lacked the following elements: documented information security policies or practices, as a cornerstone of fostering a privacy and security aware culture including appropriate training, resourcing and management focus; an explicit risk management process – including periodic and pro-active assessments of privacy threats, and evaluations of security practices to ensure ALM’s security arrangements were, and remained, fit for purpose.

Findings also revealed ALM lacked adequate training to ensure all staff (including senior management) were aware of, and properly carried out, their privacy and security obligations appropriate to their role and the nature of ALM’s business.

It concluded the company did not take reasonable steps in the circumstances to protect the personal information it held under the Australian Privacy Act.”

CIO

This is an issue that is being dealt with internationally, both here in Australia and in Canada. Some of the simple things they failed to do were multi-factor authentication and appropriate password management. That means they only had a single login for something like a server, and most likely used poor passwords, or reused passwords elsewhere that may themselves become compromised. The fallout is not just that some people may be found out for sleeping around; it also means there is now a lot more real password data available to attackers, making password-cracking software and wordlists that much stronger.

Remember that if you are keeping any information from your users, even just usernames and passwords, you need to protect that information effectively. Even small breaches can have very large consequences for you or your company.
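As a concrete example of "protecting it effectively": never store passwords in plain text; store a salted, slow hash instead. Below is a minimal C# sketch using .NET's built-in PBKDF2 class. The class and method names come from the standard library, but the iteration count and sizes here are just illustrative choices, not a recommendation from this post.

using System;
using System.Security.Cryptography;

public static class PasswordStorage
{
    // Illustrative parameters only; tune the iteration count to your hardware.
    const int SaltSize = 16;
    const int HashSize = 32;
    const int Iterations = 100000;

    // Returns "iterations.salt.hash" for storage in a database.
    public static string Hash(string password)
    {
        byte[] salt = new byte[SaltSize];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            byte[] hash = pbkdf2.GetBytes(HashSize);
            return $"{Iterations}.{Convert.ToBase64String(salt)}.{Convert.ToBase64String(hash)}";
        }
    }

    // Re-derives the hash from the stored salt and compares in constant time.
    public static bool Verify(string password, string stored)
    {
        string[] parts = stored.Split('.');
        int iterations = int.Parse(parts[0]);
        byte[] salt = Convert.FromBase64String(parts[1]);
        byte[] expected = Convert.FromBase64String(parts[2]);

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            byte[] actual = pbkdf2.GetBytes(expected.Length);
            int diff = 0;
            for (int i = 0; i < expected.Length; i++)
                diff |= actual[i] ^ expected[i];
            return diff == 0;
        }
    }
}

Even for a small student project, this kind of approach means a database leak exposes hashes rather than everyone's actual passwords.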


DEGAUSS SHADER MATHS & CONCEPT

I did up a degauss Unity image effect for Incapacitor. During one of our meetings, we were talking about the feedback we would use for the player taking damage; as the player is a robot, we would not be able to use the default CoD-style blood screen. I have played SOMA, as you may have read in previous posts, and *spoiler* the player is a robot. The damage feedback for the player in that game is a glitchy, degauss-like visual interruption with lots of chromatic aberration. I showed the team what old CRT monitors looked like when they were degaussed (example), and the team was very interested in having something much like it in our game.

I used a base image effect shader by Steve Halliway (here) to skip the setup process, and incorporated the built-in Unity image effects for chromatic aberration and vortex to help add to the overall look. I have changed the vortex effect so that it renders pixels from outside the source image as black, so the default one won't look quite right.

The main thing that I did was run each of the colour channels through a sine wave at different offsets, after offsetting the current reference pixel position by a base sine wave. This is done in the shader, and the timing for the sine is sent through the image effect script. The image effect script allows easy adjustment, or dynamically controlled strength, of the effect itself. I split up the controls for the wobble strength and effect time, colour strength, chromatic aberration and vortex twist strength, so it can work for anything from small to large damage effects.
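For anyone wanting to set up something similar, this is roughly what the driving script looks like. It's a minimal sketch rather than the exact script from Incapacitor, and the shader property names (_WobbleTime, _WobbleStrength, _ColourStrength) are placeholders for whatever your shader exposes.

using UnityEngine;

[ExecuteInEditMode]
[RequireComponent(typeof(Camera))]
public class DegaussEffect : MonoBehaviour
{
    public Material degaussMaterial;   // material using the degauss image effect shader

    [Range(0f, 1f)] public float wobbleStrength = 0.2f;
    [Range(0f, 1f)] public float colourStrength = 0.5f;
    public float effectSpeed = 10f;

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (degaussMaterial == null)
        {
            Graphics.Blit(source, destination);
            return;
        }

        // The shader does the per-channel sine offsets; the script just feeds it
        // timing and strength so gameplay code can ramp the effect up on damage.
        degaussMaterial.SetFloat("_WobbleTime", Time.time * effectSpeed);
        degaussMaterial.SetFloat("_WobbleStrength", wobbleStrength);
        degaussMaterial.SetFloat("_ColourStrength", colourStrength);

        Graphics.Blit(source, destination, degaussMaterial);
    }
}

Gameplay code can then spike wobbleStrength when the player takes a hit and lerp it back down, which is roughly how a small-to-large damage scale could be driven.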

Currently it isn't doing a rotational wobble very well; my next goal is to add a stronger twist that over-corrects, and that twists harder the further from the centre of the screen it is (the Unity vortex works in the opposite direction). I have also been asked to add a 'static' effect. I need a better description of what is required before I start designing for it, but that will also be put in.

RAYTRACER & OPTIMIZATIONS

A couple of months ago, I started work on a project optimizing a ray tracing program that was in an awful state. It started off taking about 180,000 ms to complete a render. It had no optimization at all, and would bounce rays 7 times, even if a surface was not reflective.

I originally planned to do things like multi-threading, approximating some pixels instead of tracing them, pixel skipping, replacing more expensive mathematical functions with faster ones, replacing maths libraries, and even using the graphics card to render.
Below are the steps that I took to reduce that render time down to ~5 seconds.

OPTIMIZATION STEPS:
step 0: added OMP parallel optimization to the main loop in the main thread

step 1: lowered resolution, reasonable difference

step 2: changed RENDERABLES to SPHERE, to avoid using the virtual call. minimal change (113721ms)

current benchmark: (single line updates) 149097ms (cores @ 90%, no shadows, resolution 512). *This resolution was used for almost the whole optimization process.

step 3: set OMP parallelization on the primary loop, instead of the parallel loop. This removed the ability to hit ESC, though. CPU cores can finish early and are not handed new tasks yet. (113535ms)

step 4: set the primary loop to split the main task into 8 (for 8 cores) and set OMP parallelization across those 8 tasks. This should load balance the tasks. It currently stops rendering after the first 8/16 lines though (copy paste issue)
(77971ms)
with full picture rendered: (>11000ms)
may be having issues with constantly creating and deleting new threads

This did not work properly because I was not using it correctly. Each thread is set to work up until a barrier (its end point in the loop), and what I want is for finished threads to start taking work from the incomplete ones. There needs to be a task pool for them to work from.

step 5: properly set up dynamic task pool, (101518ms) & (103121ms)

step 6: set ray bounces to 1 instead of 4 (~79000ms)

step 7: project settings optimization (71766ms)

step 8: recursive limit set to 0 (36350ms)

step 9: removed ambient from final scene calc ( 36434ms)

step 10: removed reflection calculation from scene (36533ms)

step 11: set up scene octree (15991ms)

step 12: set octree to depth 5, max 10 (612ms)

step 13: shadows on, full resolution (12350ms)

step 14: max depth 10, max spheres 50 (5959ms)

step 15: set progressive to 20 (instead of 1) ( 1 is 8365ms, 20 is 6387ms)

step 16: skip every 2nd row of pixels (looks awful) (3317ms)

The final render test completed in about 5 seconds. The main optimization techniques that were useful were multi-threading, removing unneeded steps (like reflection bounces in a scene without reflective surfaces), setting up an octree, and setting up a dynamic task pool.
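The original program was C++ with OpenMP, but the dynamic task pool idea from steps 4 and 5 is simple enough to sketch in a few lines of C#: instead of giving each thread a fixed block of scanlines, every worker keeps pulling the next un-rendered row from a shared counter until none are left, so fast threads naturally pick up the slack. The names here (ScanlinePool, RenderRow, imageHeight) are placeholders, not the raytracer's actual code.

using System.Threading;
using System.Threading.Tasks;

public class ScanlinePool
{
    int nextRow;                 // shared counter acting as the task pool
    readonly int imageHeight;

    public ScanlinePool(int imageHeight) { this.imageHeight = imageHeight; }

    public void Render(int workerCount)
    {
        nextRow = -1;
        var workers = new Task[workerCount];
        for (int w = 0; w < workerCount; w++)
        {
            workers[w] = Task.Run(() =>
            {
                // Each worker grabs the next row atomically; no thread is left
                // idle while another still has a big fixed block to finish.
                int row;
                while ((row = Interlocked.Increment(ref nextRow)) < imageHeight)
                    RenderRow(row);
            });
        }
        Task.WaitAll(workers);
    }

    void RenderRow(int y)
    {
        // placeholder: trace every pixel in row y
    }
}

This is also why the fixed 8-way split in step 4 load balanced poorly: a core that finished its block early had nothing left to do.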

REAL WORLD LIMITATIONS

 

Our minimum viable product target for the game Slime Herder was the Samsung Galaxy Tab 3 Lite tablets that we have available for testing here at SAE. These have very limited resources: a low CPU clock speed, only two cores, and only 1GB of RAM. This means that we cannot have any high-cost parts to our game. One large concern was the jelly physics we added to the slimes, which affects each vertex in the object's mesh, making them slightly slosh around and giving them the feel of a slightly fluid character. The large amount of mesh transformation this involves every frame could heavily affect the framerate on a lower-end device. To avoid this becoming an issue, we found an existing solution for the jelly physics that had a relatively low cost to run, and stress tested it in the build environment and on the test device as early as possible.
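The basic idea behind that kind of low-cost jelly effect is to attach a tiny spring simulation to each vertex rather than using real soft-body physics. This is a rough Unity sketch of the concept, not the specific solution we used; the spring and damping values are arbitrary.

using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class CheapJelly : MonoBehaviour
{
    public float springForce = 20f;   // pull back towards the original shape
    public float damping = 5f;        // stop the wobble from going forever
    public float wobbleScale = 0.3f;  // how far movement displaces vertices

    Mesh mesh;
    Vector3[] original;   // rest-pose vertices
    Vector3[] displaced;  // current wobbling vertices
    Vector3[] velocity;   // per-vertex spring velocity
    Vector3 lastPosition;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        original = mesh.vertices;
        displaced = mesh.vertices;
        velocity = new Vector3[original.Length];
        lastPosition = transform.position;
    }

    void Update()
    {
        // How much the slime moved this frame drives the slosh.
        Vector3 movement = transform.position - lastPosition;
        lastPosition = transform.position;
        Vector3 localMove = transform.InverseTransformVector(movement) * wobbleScale;

        for (int i = 0; i < displaced.Length; i++)
        {
            displaced[i] -= localMove;                                    // lag behind the body
            Vector3 toRest = original[i] - displaced[i];
            velocity[i] += toRest * springForce * Time.deltaTime;         // spring back
            velocity[i] *= Mathf.Clamp01(1f - damping * Time.deltaTime);  // damp the wobble
            displaced[i] += velocity[i] * Time.deltaTime;
        }

        mesh.vertices = displaced;        // this re-upload is the per-frame cost to watch
        mesh.RecalculateNormals();
    }
}

Even this needs care on a 1.2 GHz dual core, since writing vertices and recalculating normals every frame is exactly the kind of per-frame cost we had to stress test early.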

TECH SPECS for our test device

Processor

  • CPU Speed
    1.2 GHz
  • CPU Type
    Dual Core

Display

  • Size (Main Display)
    7.0″ (178.0 mm)
  • Resolution (Main Display)
    WSVGA (1024×600, 169PPI)
  • Technology (Main Display)
    TFT
  • Color Depth (Main Display)
    16M
  • S Pen Support
    No

Memory

  • RAM Size (GB)
    RAM 1GB, Storage 8GB*
  • ROM Size (GB)
    8 GB
  • External Memory Support
    MicroSD (Up to 32 GB)

OS

  • OS
    Android 4.2/4.4

Sensors

  • Sensors
    Accelerometer

Audio

  • Audio Playing Format
    MP3,M4A,3GA,AAC,OGG,OGA,WAV,WMA,AMR,AWB,FLAC,MID,MIDI,XMF,MXMF,IMY,RTTTL,RTX,OTA

Our game needed to clear multiple hurdles to play correctly on the test devices alongside as many other devices as possible. The main things we did were reducing the play area size so visibility was not an issue on small screens, and hard coding the aspect ratios, anchoring and camera positions to deal with screen ratios of 16:9 and 16:10, as not all phones and tablets are the same.
We also needed to deal with the minimal resources and the large number of functions running, and limit the number of checks we do per frame. This meant putting a soft cap on the number of slimes active at any point in time, and sending messages to trigger behaviour, rather than checking whether something is true every frame, wherever possible. Since there are little to no 3D elements or light particles, we save heavily on rendering, which is usually handled by integrated graphics on mobile devices, leaving more resources for simple sprite animations and particle effects.
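As a small illustration of the "message instead of per-frame check" idea (a sketch of the pattern, not our actual game controller), the manager below enforces a soft cap and raises a C# event when a slime is penned, so other scripts can react once instead of polling a condition in Update every frame.

using System;
using UnityEngine;

public class SlimeManager : MonoBehaviour
{
    public GameObject slimePrefab;
    public int softCap = 12;           // tuned for the low-end test device

    public int ActiveSlimes { get; private set; }

    // Other systems (score UI, win condition) subscribe once instead of
    // checking "is the level finished?" every frame.
    public event Action<int> SlimePenned;

    public void TrySpawnSlime(Vector3 position)
    {
        if (ActiveSlimes >= softCap)
            return;                    // soft cap: just refuse to spawn more

        Instantiate(slimePrefab, position, Quaternion.identity);
        ActiveSlimes++;
    }

    // Called by the pen trigger when a slime enters it.
    public void OnSlimePenned()
    {
        ActiveSlimes--;
        if (SlimePenned != null)
            SlimePenned(ActiveSlimes);
    }
}

The pen's trigger collider just calls OnSlimePenned when a slime enters, so nothing has to scan the scene every frame.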

Self-reflection​

 

How I did as a person

As a team member this tri, I did pretty well. I made sure that I was available to work whenever I could, and employed the most optimal solutions possible to save as much working time as I could. I did spend too much time on myself, though, and while both of my major projects have been successful, I did not put any time into side projects like I should have been doing, nor into blogs, and my grades have suffered for it.

While I am very loud, I have behaved reasonably appropriately, done my best to network, and managed the time that I did spend working well, not wasting it on less useful systems. I also organised several group work days with my cohort to ensure that as many of us as possible were able to get the more difficult LO's done, which was reasonably successful for those who were able to turn up. I have also helped some of my classmates outside of class with their work, helping them understand it or giving them a point of reference to work from.

How I did as a programmer

The two main projects I worked on went in different directions. For Slime Herder, working with another programmer, we did not write up a TDD, and only roughly planned out what we needed to do before beginning the programming. The resulting code was a 'game controller' script with ~50 functions in it, awful naming conventions, and little to no notes explaining what anything actually did. I felt a little too rushed to get the job done, which led to this, along with allocating myself no time to go back and fix things up before they got too complex and intertwined.
I also worked on the level generation for Incapacitor. I had time to plan out how it would work and how it would meet the design requirements, and it went through a complete rebuild, meaning it could be designed even better. I abstracted it as much as possible, pulling almost everything it does out into its own function. I have put notes in for nearly every function and statement, and I feel very confident that if I need to go in anywhere and change how the generator works, I will be able to.

This second project is not how I always work, but was a goal for my self-improvement from last tri, and proves that I am capable of it. In future, I will endeavour to make this the default way that I work.

Markov Chains

Markov first studied the stochastic processes that came to be named after him in 1906. Roughly a century later, work by Alon and Milman, Jerrum and Sinclair, and Lawler and Sokal connected these chains to sampling and combinatorics, with probabilistic techniques such as coupling playing a key role. The question of mixing time, how quickly a chain converges to its long-run behaviour, is now a lively and central part of modern probability.

The core idea: a Markov chain is a sequence of random states X0, X1, X2, ... where the probability of the next state depends only on the current state, that is, Pr(X_{n+1} = j | X_n = i, ..., X_0) = Pr(X_{n+1} = j | X_n = i). A time-homogeneous chain on a finite state space is described by a transition matrix P, where entry p_ij is the probability of moving from state i to state j, and the distribution over states evolves as x^(n+1) = x^(n) P. If the chain is irreducible (every state can eventually be reached from every other, i.e. Pr(X_n = j | X_0 = i) > 0 for some n) and aperiodic, it has a stationary distribution π satisfying π = πP, which it converges to from any starting state. The same machinery shows up in PageRank-style ranking, queueing models, speech recognition, and board games like Snakes and Ladders or "Hi Ho! Cherry-O", where the next position depends only on the current one.

Sampling exactly from π is surprisingly hard. Propp and Wilson (1998) treat each step of the chain as a random map from the state space to itself; composing enough of these maps eventually squeezes every starting state onto the same value (the maps become coalescent), and that value is an exact sample from π, free of the initialisation bias you get by simply stopping a simulation after a fixed number of steps. Asmussen, Glynn and Thorisson (1992) had found an earlier, less practical solution to the same problem.
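To make the transition-matrix idea concrete, here is a tiny C# sketch that steps a two-state chain forward and empirically approaches its stationary distribution; the matrix values are made up purely for illustration.

using System;

class MarkovDemo
{
    static void Main()
    {
        // Row i holds the probabilities of moving from state i to each state.
        double[,] P =
        {
            { 0.9, 0.1 },   // from state 0
            { 0.5, 0.5 },   // from state 1
        };

        var rng = new Random(1);
        int state = 0;
        var visits = new int[2];
        const int steps = 100000;

        for (int n = 0; n < steps; n++)
        {
            // Sample the next state using the current row of P only:
            // the chain has no memory of anything before 'state'.
            state = rng.NextDouble() < P[state, 0] ? 0 : 1;
            visits[state]++;
        }

        // For this P the stationary distribution is pi = (5/6, 1/6).
        Console.WriteLine($"state 0: {visits[0] / (double)steps:F3}, state 1: {visits[1] / (double)steps:F3}");
    }
}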

Self Reflection on my creative goals

 

Creative goals

My creative goals over the last 3 months have been the following 4 projects:

Killbot tournament

This was the killbot tournament that was laid out to our class at the beginning of the trimester. There were two sections, one focussed on target prediction & accuracy, and one focussed on pathfinding. I came 4th in the prediction section and 1st in pathfinding, and I am very proud of having done so well.

SuperSprint

This was a side project I pitched near the beginning of the trimester, designed to show off an animator's ability to create models, textures, and animations for use in a game engine. The project was set up to allow as many animators to build for it as were interested, so it became an endless runner that randomly spawned in sections of road and obstacles, and the animators would only have to create about 7 separate models, one being a skinned character. These sets would all swap out as the game runs, to show off each animator's work. Their goal was to make a model look as good as they could without causing drops in framerate: a model with a low to medium poly count, but with high-quality, detailed textures, including maps for the PBR standard shader in Unity.

I Remember

In this project, I was half of the programming team that was 'contracted' out to build this game for a team of designers. This was our first experience of having tasks given to us without really understanding how it was all meant to come together, and the project went through many iterations, so we ended up doing our best to make our scripts as modular as possible.

[over]clocked.

This was the CIU project I worked on. It was designed as an interactive experience describing a possible future incorporating transhumanism and corporate control. I worked on it with an animation student and a film student; we worked out the design as a group and I did the programming for it.

The things I did well

Scope:

I did have plans or desires to work on more things; however, the projects I did work on were completed on my end because they were scoped fairly well, and were easily reduced in size when necessary. I completed my final killbot about a day before it was due, and was able to switch my focus onto my side project, which was scoped with a scalable number of animators in mind, and was very much treated as a side project (read: the first thing to be thrown under the bus). It was still a huge project, being almost half a gig in size from the audio files and models, and only a few people ended up providing work.

 

Design:

The three projects that I took a design lead in went very well. The killbots were pretty simple, but there were three things I implemented that helped me win. The first was capturing data on the stats given to the bot, to identify the most useful stat build for my bot in particular (this was even useful for the bots of others, increasing their win rate by about 50%). I also chose to cover up my lack of perfect accuracy with a burst fire strategy, to increase the chance of hitting my targets, which increased my win rate immensely. Finally, in the maze, I chose to have my bot only scan in its cardinal directions, greatly increasing my chance to spot a target. These three things combined brought me victory.
I was able to design a decent game out of the need to display many people's work in SuperSprint, with the ability to have a scalable number of animators working on the project (including accounting for any that would drop out halfway through).
I worked very hard on the design for [over]clocked, helping polish up the main ideas we had for the game, figuring out how to best use every team member's abilities effectively, and getting the team to do the work on time, including rescoping and redesigning the project so that it still fit the theme, or created an even more meaningful experience.

 

Programming:

The biggest thing I did right in programming was realising how much I already knew, like how to reference and dereference in C++, something I was struggling with when we first learned about it. I was also doing really well at planning out a project and its programming requirements. While not under stress, I was able to plan out a project with Rohan, and we coded together, making our scripts very accessible to each other, full of notes and easy for each other to use.

The ways I have improved

Seeking help when needed:

This was very important during this tri, as there were a few times when my efforts would have failed had I not sought help. Both Greg and Iain helped me out with some very complex scripting; something that only took them around an hour would have taken me days to work through, time I didn't quite have. I was also able to receive help from my peers, like Harry with design questions, and Rohan and Chris with audio implementation when I started the endless runner project. Chris and Rohan ended up working on the endless runner with me, which was extremely helpful; I would not have had a hope of getting the project done if I had had to do the audio system for it as well, not to mention I wouldn't have been able to implement a solution as elegant as his.

Running a project:

I did run the project, though, and it is complete. I would have liked to have gotten more work time out of more people, and had more model sets completed for it, but that can't be helped; the animators especially had a lot on their plates this trimester. To have had so much work done by everyone is impressive all the same. While I have successfully worked on projects with teams before, this was the first project I headed completely myself, and I am certain I have grown better at it.

Working with others in code:

Previously, my code has been difficult for others to decipher or use; it has been very 'thrown together'. While I'm still not very good at avoiding that, especially while stressed, I am getting much better. I have identified the extreme usefulness of planning out a project, like using a TDD to identify how certain processes and functions will interact with each other.

The things I did poorly

Motivation:

I was lacking in motivation for the entirety of this trimester. There were times when I needed to start work but wouldn't for the entire day, and this would stretch on for a couple of days. I still worked hard enough to get the essentials done, but I wasted time that could have been used to practice, research or create even better things. If I had had the endless runner completed earlier, it may have convinced some more of the animators to complete their work.
I often feel very tired for little reason, or get distracted by games or Facebook easily, and I need to work on finding my focus quicker.

Research:

I barely did any research for anything that wasn't immediately essential this trimester. I know I can find any information I need as it arises, but I struggle to just look things up for fun in my spare time. This means I'm usually not learning much more than what is presented to me in class, and I don't rock up with brand new information. I am trying to fix this by getting into projects that require me to do things I don't already know about, like displaying video streams in Unity, and setting up dynamic audio tools or text display systems (in [over]clocked and I Remember). I will continue this trend over the next trimesters.

I barely touched shaders:

One of the main topics this trimester was playing with Shadertoy and creating our own shaders, getting used to the language and working around there being no switch statements, and it not being able to pause and wait on other sections of code to complete. I get the idea of how that works, but I didn't make any shaders, and I need to spend some of my own time working on them, especially if I ever plan to be a tech artist.

Identification of a failure in project management:

Once again, this trimester I decided to step back and let the designers design and run the project. This meant that even though I could see from the beginning that the project was a rolling dumpster fire when it came to project management, I failed to do anything to get it back on track until it was almost too late. The project suffered heavily for it, and although it was completed, it is a shadow of what it was meant to be.

How to avoid those in the future

Be the project manager:

From here on out, I'm going to try to be the project manager whenever I can, so I can get as much practice at it as possible, and to force myself into learning how to best manage people while also taking a lead role in programming. The only time I tend to get anything done is when I'm sure I am out of time, and when I know that people are relying on me to do it, even just to help keep them motivated to do their own work. Leading by example is a great way to light the fire under your ass.

 

DED RECKONING, research and implementation

I previously spoke about ded reckoning in an earlier blog while I was working on the killbot for our tournament. Well, I won the tournament for the pathfinding section and came fourth in the target prediction section, but I would like to talk about the research and implementation of the ded reckoning system itself.

 

What is ded reckoning?

Ded reckoning (or deduced reckoning) is used in navigation systems and is the process of calculating one's current position using a previously determined position, or fix, and advancing that position based upon known or estimated speeds over elapsed time and course.

Dead reckoning can give the best available information on position, but is subject to significant errors due to many factors as both speed and direction must be accurately known at all instants for position to be determined accurately.

This sort of system isn't often used for target prediction, because of the cumulative effect that errors cause over time; however, if you only need to know where a target will be in a second or less, it can be very cost effective.

This is why it is used in dogfighting games, to show players where they need to aim to accurately hit a target. Here is how it is displayed in War Thunder.

 

The reason it is needed in this situation is that the bullets have travel time; they will not immediately complete their flight path the second the trigger is pulled, so we must lead the target. However, this can be pretty hard to do, as judging distances on a screen isn't quite what our eyes are designed for. Displaying a simple predicted aim point for players to fire at makes the game much more accessible for every player.

Instead of using only hard information to determine the current position, it uses assumed information as well. It takes the target's current speed, direction and acceleration, the distance from the gun, and the bullet speed, and uses that to decide, based on how long it would take a bullet to get there, how far ahead the player should lead the target to accurately hit them, and displays that position.
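In its simplest form (ignoring the convergence problem discussed below), the lead calculation is just "predict where the target will be after the bullet's flight time". A rough C# sketch, with all names hypothetical:

using UnityEngine;

public static class LeadCalculator
{
    // Returns the position to aim at so a bullet fired now meets the target.
    // First approximation: flight time is based on the target's current
    // distance, not the distance to the predicted position.
    public static Vector2 SimpleLead(Vector2 gunPos, float bulletSpeed,
                                     Vector2 targetPos, Vector2 targetVel, Vector2 targetAccel)
    {
        float flightTime = Vector2.Distance(gunPos, targetPos) / bulletSpeed;

        // Constant-acceleration prediction: p + v*t + 0.5*a*t^2
        return targetPos
             + targetVel * flightTime
             + 0.5f * flightTime * flightTime * targetAccel;
    }
}

The catch, as described below, is that aiming at this point changes the distance and therefore the flight time, which is exactly the convergence problem my killbot ran into.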

 

How did I use it for my killbot?


As we saw in the last post about my killbot's targeting plans, this is what is happening behind the scenes. The killbot gets two scans of an enemy over the previous two turns, and from this it can assume a speed and direction (using three scans would have ensured complete accuracy, which proved to be more useful, as that is what won the tournament section focussing on prediction). The simplest solution is then to determine how far the target is from my killbot and how many turns it would take the projectile to travel that distance, and then shoot that many turns ahead along the bot's current predicted path. However, the bullet's travel time to the new target position was not being taken into account. I had assumed that this would be 'close enough', but it was throwing my accuracy completely off. I also could not hit anyone who was always moving along a curve, as I did not account for this.
The problem with the bullet's own travel time was that each extra turn of travel added more distance for it to cover. This looks a lot like Zeno's paradox of Achilles and the tortoise:


So to account for this, using the target prediction that we have, we test how close the bullet will be to the enemy's predicted path for up to 60 turns, and once we find the closest the bullet can get, we use that angle. This means the bullet's travel time is now taken into account. It still isn't perfect, but by including burst fire in the build, it made my bot a strong contender, coming 4th in the target prediction section.

Below is the block of code I used to converge the bullet's path and the target's path as closely as possible.

 // Test flight times of 1 to 59 turns, converging on the aim point where the
 // bullet's required per-turn travel best matches its actual travel.
 for (int i = 1; i < 60; i++)
 {
   // Per-turn offset needed if the bullet takes i turns to arrive, adjusted
   // by the target's predicted per-turn movement (tempVelocity).
   tempMagnitude = ((currentPos - input.position) / i) + tempVelocity;

   // If this candidate is closer to the bullet's per-turn travel
   // (checkVelocity) than the best found so far, keep it.
   if (GetDistVec2(tempMagnitude, checkVelocity) < GetDistVec2(bestMagnitude,
       checkVelocity))
   {
     // Store the corresponding aim point (i - 1 turns of lead, scaled to world units).
     bestMagnitude = currentPos + (tempMagnitude * (i - 1) * 0.1);
   }
 }
 tempMagnitude = bestMagnitude;

 

Sexuality and Gender in games

How is this still an issue?

What even is gender?

A person’s sex is what’s in between their legs, while their gender is the characteristics society aligns to the extremes of a spectrum representing masculinity and femininity.

In sociological terms, Ann-Maree Nobelius refers to ‘gender role’ as the characteristics and behaviors that different cultures attribute to the sexes. What it means to be a ‘real man’ in any culture requires male sex plus what our various cultures define as masculine characteristics and behaviors, likewise a ‘real woman’ needs female sex and feminine characteristics. [1]

This is a reasonable description, but gender roles are a huge issue in games and in the games industry. Think about the gender roles set by the intro to Fallout 4. Before you edit your character, you are a 'default' straight white male. After creating your character, you still have to be straight and you have a child. When you leave your house, everyone in your street is in a straight relationship, and the plot device that gets you to follow the main story is that they took your kid and killed your spouse.
In the previous two titles you edited your character before you got to see them, you weren't forced to be a straight character at any point, and 'I have to save/avenge my family' wasn't your primary driving force for following the main story missions.

 

Sexuality and gender (identities) represented in games

Typically, family and relationships have been used as walking plot devices (Gears of War 2) or mechanics (Fallout 4). If you don't already start off with one, then you make your own by gifting a member of the opposite sex (Fable) until you can complete their mini-game or side-quest (Mass Effect). Now that you have a spouse, you gain a bonus to your XP, or even an achievement for sleeping with them (Dragon Age: Origins). [2]

There are only a few examples of gay and transgender characters that have come from the mainstream games industry, such as:

Examples of Gay, Transgender or Cross Dressing Video Game Characters

These are not the best representations of their respective gender or sexuality though, as they are poorly portrayed, one-dimensional characters, and the only protagonist/main character examples are found in games with a character creator. They are only non-straight by player choice, not as a crucial part of the plot.

Robert Yang cuts the mainstream industry some slack as the climate is so risk-averse that it’s a miracle when they get to make something compelling, much less invest heavily in writing and narrative design. He speaks of the many industry people he has met who have much more radical design opinions than their stakeholders and customers will allow them to express in their work. However, there are just as many who are totally oblivious.

Yang concedes it is difficult to say whether or not games have succeeded in portraying non-heterosexual figures accurately, as the industry is still learning to tell stories with interaction. “Most game characters are still refrigerators with guns,” he laments. [3]

There have been large strides forward in creating multidimensional non-straight and non-binary characters by BioWare, whose games include gay, asexual and transgender characters, and which has been praised for treating them respectfully.

 

http://dragonage.wikia.com/wiki/Dorian_Pavus

http://masseffect.wikia.com/wiki/Liara_T%27Soni

http://dragonage.wikia.com/wiki/Cremisius_Aclassi

 

 

What’s the problem though

The problem is that there has been a history of awful representation, if any at all, which makes people feel unwelcome in the community and in the industry (and they really have been). This then contributes to a lack of diversity in the industry, becoming a recursive problem.

There is still an issue in the industry today; take a look at the promotional material for Dead Island Riptide, tastefully named the 'Zombie Bait Edition', which shipped as a limited edition in Europe and Australia. [4]

Much more recently there was the incident where a highly paid developer for Oculus crashed a queer developer party at GDC to get free drinks and mansplain how we are “beyond safe spaces”. [5]

 

WHY you should care

We are creators of media content. We have learned that there is immense power in media, enough to define and control cultural values. Even just supporting an ideal or creating a dialogue is enough to make a huge difference.

Tracy L. Dietz states that video game characters have the potential to shape players’ perceptions of gender roles. Through social comparison processes, players learn societal expectations of appearances, behaviors and roles.
Girls may expect that they be dependent victims and that their responsibilities include maintaining beauty and sexual appeal, while boys may determine that their role is to protect and defend women.
Thus, Dietz claims, the roles internalized by the child, including gender, become for the child, and later for the adult, a basis for other roles and for action. The gender roles internalized by young individuals have a significant impact upon their perspectives and the additional roles they assume in later life.
Feminine and masculine symbols are supposed to become a part of a child’s identity. [6]

There is strong evidence that media affects society, helping to form the rules of gender ([7] Gendered Marketing). We know that boys' and girls' colours used to be swapped, and that Lego was marketed to girls and the whole family before it was marketed just to boys.

 

What you can do

Learn how to identify power structures. Who gains from there being specific gender roles? What about in games? What are the best ways to subvert binary gender roles and hetero sexual values in your medium?

Attempt to create socially responsible media. Don't rely purely on tropes and binary genders. Most attempts at this in games fall short only for lack of thoroughly fleshed-out character designs, and games that do incorporate it well, like The Last of Us, have performed exceedingly well.

Help promote and support diversity in your industry, by helping to create and respect safe spaces. Stand up against poor behavior and views against sexuality and gender.

Do your best to be a decent person and treat everyone equally.

Thank you for your time.


References

[1] http://www.med.monash.edu.au/gendermed/sexandgender.html

[2] http://www.critical-distance.com/2009/05/05/achievement-unlock-sex/

[3] http://www.1up.com/features/robert-yang-translating-gay-experience

[4] http://au.ign.com/articles/2013/04/24/controversial-dead-island-zombie-bait-edition-ships-anyway

[5] https://unicornbooty.com/some-straight-douchebag-from-oculus-rift-is-behaving-badly-at-the-game-developers-conference/

[6] http://link.springer.com/article/10.1023%2FA%3A1018709905920

[7] https://www.youtube.com/watch?v=3JDmb_f3E2c

never gonna be A*

 

A* is a computer algorithm that is widely used in pathfinding and graph traversal, the process of plotting an efficiently traversable path between multiple points, called nodes. It is noted for its high performance and accuracy.

How does it work?

This video tutorial goes through each step of setting up an A* algorithm, which will then need to be implemented into your project (for example, how you use the list of nodes once it is returned, or how you identify impassable areas of terrain).
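For reference, this is the core of the algorithm in compact C# form: a grid-based A* with a Manhattan-distance heuristic. It's a minimal sketch (a plain list stands in for a proper priority queue, and the grid handling is the bare minimum), not the killbot framework's API.

using System;
using System.Collections.Generic;

public static class AStar
{
    // walkable[x, y] marks passable cells; returns the path as grid coordinates,
    // or null if the goal can't be reached.
    public static List<(int x, int y)> FindPath(bool[,] walkable,
                                                (int x, int y) start, (int x, int y) goal)
    {
        int w = walkable.GetLength(0), h = walkable.GetLength(1);
        var open = new List<(int x, int y)> { start };
        var cameFrom = new Dictionary<(int, int), (int, int)>();
        var gCost = new Dictionary<(int, int), int> { [start] = 0 };

        int Heuristic((int x, int y) a) => Math.Abs(a.x - goal.x) + Math.Abs(a.y - goal.y);

        while (open.Count > 0)
        {
            // Pick the open node with the lowest f = g + h (a real implementation
            // would use a priority queue here).
            var current = open[0];
            foreach (var n in open)
                if (gCost[n] + Heuristic(n) < gCost[current] + Heuristic(current))
                    current = n;
            open.Remove(current);

            if (current.Equals(goal))
            {
                // Walk the cameFrom chain backwards to build the path.
                var path = new List<(int x, int y)> { current };
                while (cameFrom.TryGetValue(current, out var prev))
                {
                    current = prev;
                    path.Add(current);
                }
                path.Reverse();
                return path;
            }

            // 4-way neighbours; this is where diagonal movement or checks against
            // thick walls would go.
            var neighbours = new (int x, int y)[]
            {
                (current.x + 1, current.y), (current.x - 1, current.y),
                (current.x, current.y + 1), (current.x, current.y - 1)
            };

            foreach (var next in neighbours)
            {
                if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h) continue;
                if (!walkable[next.x, next.y]) continue;

                int tentative = gCost[current] + 1;
                if (!gCost.TryGetValue(next, out int existing) || tentative < existing)
                {
                    gCost[next] = tentative;
                    cameFrom[next] = current;
                    if (!open.Contains(next)) open.Add(next);
                }
            }
        }
        return null;   // no path
    }
}

Paths that cut through walls are often worth tracing back to that neighbour/walkability check, for instance when walls sit on cell edges rather than filling whole cells.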

I am still working on implementing this pathfinding into my killbot, I have run into a lot of issues and am currently stuck on this:

The path that is returned seems to enjoy moving through walls, or the selected nodes to travel along are spaced very far apart (moving across wall sections as well).

I’m not sure this will be ready for the bot tournament, not unless I go without sleep, but I can’t afford that as I have 2 other projects due this week, and I’m leading a side project.

However, the side project is off to a great start. It is an endless runner designed to show off the capabilities of a whole bunch of animators at once. It is set up so that each team member can work in parallel as much as possible, and it will be able to credit each artist as their work comes up, as well as allow control over things like the flow of time and camera positions. This week I started recruiting animators; there are about 17 working on the project right now. We also chose the main themes that everyone will be working in, as shown here:

 

Those were chosen by voting from a list of 20 mood boards, each submitted by a member of the animation team. Each member then got to vote for 5 different themes, meaning that everyone voted for their own board and 4 others. These five were the most popular, so everyone got to have their say, and now they each get to choose one of these 5 themes to create from. The whole point of this is so that we can have background music and skyboxes that fit the models specifically (and I don't have 20 audio guys to do 20 music tracks). I am extremely happy with the work that has been put in just to make up some nice mood boards, and I am excited to see the work completed!