SECURITY RISKS AND HOW THEY AFFECT US

Security issues that affect us on Slime Herder and Incapacitor

For Slime Herder, we keep track of information using Unity Analytics. We track how many people have attempted a purchase of our game, and when people close our game, so we know the last level they played. This is very innocent data, and we are not storing the information ourselves; that is handled by Unity. However, it may have been useful to keep track of where our players are located and when they are playing, to help identify better marketing practices. If we were to store information like that ourselves, we would be opening ourselves up to serious legal issues. If we kept it on a server that was hacked and the information was released, we would be liable for that; we could be fined hundreds of thousands of dollars or even face jail time.
As we are just students, that is not something a project like this could afford, so we made the decision to keep very minimal information as a safety measure.


Recently there was a large data breach of the Ashley Madison servers. Approximately 36 million users' account information was compromised in the attack, and investigations showed this was possible due to several flaws in the company's security management systems and procedures. There was little documentation of their policies and procedures, a lack of resourcing and management of the security process, no assessment of privacy threats, and no review of the security process to check it was still fit for purpose.

“According to the findings, ALM’s security framework lacked the following elements: documented information security policies or practices, as a cornerstone of fostering a privacy and security aware culture including appropriate training, resourcing and management focus; an explicit risk management process – including periodic and pro-active assessments of privacy threats, and evaluations of security practices to ensure ALM’s security arrangements were, and remained, fit for purpose.

Findings also revealed ALM lacked adequate training to ensure all staff (including senior management) were aware of, and properly carried out, their privacy and security obligations appropriate to their role and the nature of ALM’s business.

It concluded the company did not take reasonable steps in the circumstances to protect the personal information it held under the Australian Privacy Act”

CIO

This is an issue that is being dealt with internationally, both here in Australia and in Canada. Some of the simple things they failed to do were multi-factor authentication and appropriate password management. That means they only had one login for something like a server, and most likely used poor passwords, or reused passwords that may themselves have been compromised elsewhere. The fallout is not just that some people may figure out that you have been sleeping around; it also means there is now a lot more real-world password data (and password hashes) in the wild, which makes password-cracking software that much stronger.

Remember that if you are keeping any information from your users, even just usernames and passwords, you need to protect that information effectively. Even small breaches can have very large consequences for you or your company.

DEGAUSS SHADER MATHS & CONCEPT

I put together a degauss image effect for Incapacitor in Unity. During one of our meetings we were talking about the feedback we would use for the player taking damage; as they are a robot, we would not be able to use the default CoD-style blood screen. I have played SOMA, as you may have read in previous posts, and *spoiler* the player is a robot. The damage feedback for the player in that game is a glitchy, degauss-like visual interruption with lots of chromatic aberration. I showed the team what old CRT monitors looked like when they were degaussed (example), and the team was very interested in having something much like it in our game.

I used a base image effect shader by Steve Halliway (here) to skip the setup process, and incorporated the built-in Unity image effects for chromatic aberration and vortex to help add to the overall look. I have changed the vortex effect so that it renders pixels from outside the source image as black, so the default one won't give quite the same result.

The main thing I did was run each of the colour channels through a sine wave at a different offset, after offsetting the current reference pixel position by a base sine wave. This is done in the shader, and the timing for the sine is passed in from the image effect script. That script allows easy adjustment, or dynamically controlled strength, of the effect itself. I split up the controls for wobble strength and effect time, colour strength, chromatic aberration and vortex twist strength, so it can cover everything from small to large damage effects.
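
The actual effect lives in an HLSL image-effect shader with a C# driver script, but the core of the maths is simple enough to sketch. The snippet below is illustrative C++-style pseudocode, not the real shader; every name and constant in it is made up.

#include <cmath>

// Sketch of the degauss wobble maths. 'uvY' is the vertical position of the
// pixel being shaded; 'time', 'wobbleStrength' and 'colourStrength' would be
// supplied by the controlling image effect script.
float degaussSampleOffsetX(float uvY, float time,
                           float wobbleStrength, float colourStrength,
                           float channelPhase)
{
    // Base wobble: every channel shares this horizontal push, a sine wave
    // over the pixel's height that scrolls with time.
    float baseWave = std::sin(uvY * 40.0f + time * 10.0f) * wobbleStrength;

    // Per-channel wobble: the same wave again, but phase-shifted differently
    // for red, green and blue, so the channels pull apart and visibly tear.
    float channelWave = std::sin(uvY * 40.0f + time * 10.0f + channelPhase)
                        * colourStrength;

    return baseWave + channelWave;
}

// The fragment step then samples the source image three times, once per
// channel, e.g. red with channelPhase 0.0f, green with ~2.0f, blue with ~4.0f.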

Currently it isn’t doing a rotational wobble very well, my next goal is to add a stronger twist that over-corrects, and has a stronger twist the further from the centre of the screen it is (the unity vortex works in the opposite direction). I have also been asked to add a ‘static’ effect. I need a better description of what is required before I start designing for it, but that will also be put in.

RAYTRACER & OPTIMIZATIONS

A couple of months ago I started work on a project to optimize a ray tracing program that was in an awful state. It started off taking about 180,000ms to complete a render. It had no optimization at all, and would bounce rays 7 times even if a surface was not reflective.

I originally planned to do things like multi-threading, approximating or skipping pixels, replacing more expensive mathematical functions with faster ones, replacing maths libraries, and even using the graphics card to render.
Below are the steps I took to reduce that render time down to ~5 seconds.

OPTIMIZATION STEPS:
Step 0: added OMP parallel optimization to the main loop in the main thread.

Step 1: lowered the resolution; reasonable difference.

Step 2: changed RENDERABLES to SPHERE to avoid the virtual call; minimal change (113721ms).

Current benchmark, with single-line updates: 149097ms (cores @ 90%, no shadows, resolution 512). This resolution was used for almost the whole optimization process.

Step 3: set OMP parallelization on the primary loop instead of the parallel loop. This removed the ability to hit ESC, though, and CPU cores can finish early without being handed new tasks yet (113535ms).

Step 4: set the primary loop to split the main task into 8 (for 8 cores) and set OMP parallelization to split those 8 tasks. This should load balance the tasks. Currently it stops rendering after the first 8/16 lines though (copy-paste issue) (77971ms);
with the full picture rendered: (>11000ms).
It may also be having issues with constantly creating and deleting new threads.

This did not work properly because I was not using OMP correctly. Each thread works up to a barrier (its end point in the loop), and what I want is for finished threads to start taking work from the incomplete ones; there needs to be a task pool for them to work from.

Step 5: properly set up a dynamic task pool (101518ms & 103121ms).

Step 6: set ray bounces to 1 instead of 4 (~79000ms).

Step 7: project settings optimization (71766ms).

Step 8: recursion limit set to 0 (36350ms).

Step 9: removed ambient from the final scene calculation (36434ms).

Step 10: removed the reflection calculation from the scene (36533ms).

Step 11: set up a scene octree (15991ms); sketched below the step list.

Step 12: set the octree to depth 5, max 10 spheres per node (612ms).

Step 13: shadows on, full resolution (12350ms).

Step 14: max depth 10, max spheres 50 (5959ms).

Step 15: set progressive rendering to 20 instead of 1 (1 is 8365ms, 20 is 6387ms).

Step 16: skip every 2nd row of pixels; looks awful (3317ms).
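
Steps 11 to 14 were the single biggest win, so the octree deserves a quick explanation. Below is a rough sketch of the idea only; it is not the actual raytracer code, and the names, node layout and split thresholds are all illustrative. The point is that spheres are bucketed into nested boxes, and a ray only tests the spheres inside boxes it actually passes through.

#include <algorithm>
#include <memory>
#include <vector>

struct Vec3   { float x, y, z; };
struct Ray    { Vec3 origin, dir; };
struct Sphere { Vec3 centre; float radius; };
struct Hit    { float t = 1e30f; int sphereIndex = -1; };
struct AABB   { Vec3 min, max; };

// Standard slab test: does the ray pass through the box at all?
bool rayHitsBox(const AABB& b, const Ray& r)
{
    float tmin = 0.0f, tmax = 1e30f;
    auto axis = [&](float ro, float rd, float lo, float hi)
    {
        float inv = 1.0f / rd;
        float t0 = (lo - ro) * inv, t1 = (hi - ro) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    };
    axis(r.origin.x, r.dir.x, b.min.x, b.max.x);
    axis(r.origin.y, r.dir.y, b.min.y, b.max.y);
    axis(r.origin.z, r.dir.z, b.min.z, b.max.z);
    return tmax >= tmin;
}

struct OctreeNode
{
    AABB bounds;
    std::vector<int> sphereIndices;                    // filled on leaf nodes
    std::vector<std::unique_ptr<OctreeNode>> children; // 0 or 8 entries

    // Nodes are split until a maximum depth is reached or a node holds no
    // more than a maximum number of spheres (the "depth 5, max 10" and
    // "depth 10, max 50" settings in the steps above).
};

// Walk the tree, skipping any subtree whose bounding box the ray misses.
void queryOctree(const OctreeNode& node, const Ray& ray,
                 const std::vector<Sphere>& spheres, Hit& closest)
{
    if (!rayHitsBox(node.bounds, ray))
        return;                                // whole subtree skipped

    for (int i : node.sphereIndices)
    {
        // Ray/sphere intersection test elided; it would update 'closest'
        // if this sphere were hit nearer than the current best.
        (void)spheres[i];
    }

    for (const auto& child : node.children)
        queryOctree(*child, ray, spheres, closest);
}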

The final render test completed in about 5 seconds. The optimization techniques that proved most useful were multi-threading, removing unneeded steps (like reflection bounces in a scene without reflective surfaces), setting up an octree, and using a dynamic task pool.
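
For context, the threading side of this (steps 0 through 5) converged on the shape below. This is an illustrative OpenMP sketch rather than the project's actual code: schedule(dynamic) is one common way to get the task-pool behaviour described above, where a core that finishes its rows is immediately handed more work instead of sitting idle at a barrier.

#include <omp.h>
#include <vector>

struct Colour { float r, g, b; };

// Per-pixel work; in the real tracer this is the full ray trace including
// the octree query. Declared here only to keep the sketch self-contained.
Colour traceRay(int x, int y);

void renderImage(std::vector<Colour>& pixels, int width, int height)
{
    // One "task" is one row of pixels. With dynamic scheduling, each thread
    // grabs the next unrendered row as soon as it finishes its current one,
    // so fast threads don't end up waiting on slow ones.
    #pragma omp parallel for schedule(dynamic, 1)
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
            pixels[y * width + x] = traceRay(x, y);
    }
}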

REAL WORLD LIMITATIONS


Our minimum viable product target for the game Slime Herder was the Samsung Galaxy Tab 3 Lite tablets that we have available for testing here at SAE. These have very limited resources: a low CPU clock speed, only two cores and only 1GB of RAM. This means we cannot have any high-cost parts in our game. One large concern was the jelly physics we added to the slimes, which moves each vertex in the object's mesh so they slightly slosh around, giving them the feel of a slightly fluid character. The large amount of mesh transformation this involves every frame could have heavily affected the framerate on a lower-end device. To avoid this becoming an issue, we found an existing solution for the jelly physics that had a relatively low cost to run, and stress tested it in the build environment and on the test device as early as possible.

TECH SPECS for our test device

Processor

  • CPU Speed
    1.2 GHz
  • CPU Type
    Dual Core

Display

  • Size (Main Display)
    7.0″ (178.0 mm)
  • Resolution (Main Display)
    WSVGA (1024×600, 169PPI)
  • Technology (Main Display)
    TFT
  • Color Depth (Main Display)
    16M
  • S Pen Support
    No

Memory

  • RAM Size (GB)
    1 GB
  • ROM Size (GB)
    8 GB
  • External Memory Support
    MicroSD (Up to 32 GB)

OS

  • OS
    Android 4.2/4.4

Sensors

  • Sensors
    Accelerometer

Audio

  • Audio Playing Format
    MP3,M4A,3GA,AAC,OGG,OGA,WAV,WMA,AMR,AWB,FLAC,MID,MIDI,XMF,MXMF,IMY,RTTTL,RTX,OTA

Our game needed to clear multiple hurdles to play correctly on the test devices, alongside as many other devices as possible. The main thing we did was reduce the size of the play area so that visibility was not an issue on small screens. We also had to hard-code the aspect ratios, anchoring and camera positions to deal with screen ratios of 16:9 and 16:10, as not all phones and tablets are the same.
We also needed to deal with the minimal resources and the large number of functions running, and limit the number of checks we do per frame. This meant putting a soft cap on the number of slimes active at any point in time and, where possible, sending messages to trigger behaviour rather than checking whether something is true every frame. Because there are little to no 3D elements or lighting, we save heavily on rendering cycles, something that is usually handled by integrated graphics in mobile devices, leaving more resources for simple sprite animations and particle effects.
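
The game code itself is Unity C#, but the idea of triggering work from a message instead of polling every frame is language-agnostic. Here is a rough sketch in C++ with invented names, just to show the shape of it:

#include <functional>
#include <vector>

// Listeners register a handler once, instead of checking a flag every frame.
struct SlimeEvents
{
    std::vector<std::function<void(int)>> onSlimeCaptured;

    void subscribe(std::function<void(int)> handler)
    {
        onSlimeCaptured.push_back(std::move(handler));
    }

    // Called exactly once, at the moment a slime is actually captured.
    void slimeCaptured(int slimeId)
    {
        for (auto& handler : onSlimeCaptured)
            handler(slimeId);
    }
};

// Without this, every interested object would be doing the equivalent of
//   void Update() { if (slimeWasCapturedThisFrame()) react(); }
// every frame. With the message approach, the listeners only run when there
// is actually something to react to.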

SELF-REFLECTION


How I did as a person

As a team member this tri, I did pretty well. I made sure that I was available to work at any time, as much as I could, and went for the most efficient solutions possible to save as much working time as I could. I did spend too much time on myself, though, and while both of my major projects have been successful, I did not put any time into side projects like I should have been doing, or into blogs, and my grades have suffered for it.

While I am very loud, I have behaved reasonably appropriately, done my best to network, and managed the time I did spend working well, not wasting it on less useful systems. I also organised several days of working together as a group with my cohort, to ensure that as many of us as possible were able to get the more difficult LOs completed, which was reasonably successful for those who were able to turn up. I have also helped some of my classmates outside of class with their work, helping them to understand it or giving them a point of reference to work from.

How I did as a programmer

The two main projects I worked on went in different directions. For Slime Herder, working with another programmer, we did not write up a TDD and only roughly planned out what we needed to do before beginning the programming. The resulting code was a 'game controller' script with ~50 functions in it, awful naming conventions and little to no notes explaining what anything actually did. I felt a little too rushed to get the job done, which led to this, along with allocating myself no time to go back and fix things up before they got too complex and intertwined.
I also worked on the level generation for Incapacitor. I had time to plan out how it would work and how it would meet the design requirements, and it went through a complete rebuild, which meant it could be designed even better. I was able to abstract it as much as possible, pulling almost everything it does out into its own function. I have put notes in for nearly every function and statement, and I feel very confident that if I need to go in anywhere and change how the generator works, I will be able to.

This second project is not how I always work, but it was a goal for my self-improvement from last tri, and it proves that I am capable of it. In future, I will endeavour to make this the default way that I work.