Health was added to the Main Character by creating a Health variable that changes when the character interacts with a script that deals "damage". Damage is essentially a decrease in the value assigned to the Health variable, and as the health goes lower the GUI image for Health changes. Overall I have 11 images for health: when the Health variable is at 100 the health bar looks full, and when it is at 0 it is empty.
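The actual script lives inside Unity, but the mapping from the Health variable onto one of the 11 images can be sketched like this (the function name and the rounding rule are my own stand-ins, not the real script):

```typescript
// Sketch: map a 0-100 health value to one of 11 health-bar images.
// The image array is assumed to run from empty (index 0) to full (index 10).
function healthImageIndex(health: number, imageCount: number = 11): number {
  const clamped = Math.max(0, Math.min(100, health)); // keep health in range
  // 100 health -> last image (full bar), 0 health -> first image (empty bar)
  return Math.round((clamped / 100) * (imageCount - 1));
}
```

With 11 images this gives a new image roughly every 10 points of health, which matches the full-at-100, empty-at-0 behaviour described above.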
Additionally I managed to set up the script so that when the player ran out of health it would send a message activating a function within my main player controller script, causing the character to respawn at the character respawn point with all its health. Effectively, if the player were to run out of health or fall out of the game, they would have all their health returned to them and be brought back to the spawn point. To the right is an image of the Health at 50% when the character has taken some damage from an object with the damage script. (Bear in mind this is not the final resolution for the game.)
Furthermore, I've managed to create a script that adds a value to the health when the character interacts with it. The object itself acts as a type of Health pickup and is destroyed when the main player touches it. The same method is used with the checkpoint: I have both objects rotating in mid-air, and both get destroyed on contact but perform different functions.
Below is the code for the Health-up script.
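Since that code is shown as an image, here is a rough sketch of what the Health-up pickup does, based on the description above (the names and the cap at 100 are my assumptions, not the actual Unity script):

```typescript
// Hedged sketch of the Health-up pickup logic described above.
interface Player { health: number; }

// On contact: add the pickup's heal amount, capped at full health (assumed 100).
// In the real script the pickup object would also be destroyed at this point.
function applyHealthPickup(player: Player, healAmount: number): Player {
  const healed = Math.min(100, player.health + healAmount);
  return { health: healed };
}
```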
Now, in order for the character to move around the scene we need some physical colliders they can jump to and from; in other words, we need some platforms! By placing a few rectangular gameobjects into the scene view I will effectively be creating platforms. What I must make sure, though, is that they're located along the same Z-axis as one another, otherwise they won't be in line. The reason these default gameobjects will act as platforms is their individual box colliders. These box colliders give solid-like properties to the objects when placed within the scene. If these objects didn't have box colliders then the characters/dynamic shapes would just fall through the platforms and out of the level.
Additionally, the platforms that are touching each other are put into the same empty parent object, which has a "Combine Children" script attached. This script takes all of the child objects (platforms) and combines their meshes together. Therefore, instead of rendering 6 different objects, only 1 object is rendered, cutting down on rendering work and enhancing game performance.
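The idea behind "Combine Children" can be shown in miniature: merge the children's geometry into a single mesh so the renderer only has to draw one object (this is a simplified, vertices-only sketch of the concept, not Unity's actual combiner):

```typescript
// Simplified illustration: each child mesh is just an array of vertex values,
// and combining them means concatenating into one array (one draw, not six).
function combineMeshes(childMeshes: number[][]): number[] {
  return childMeshes.reduce((combined, mesh) => combined.concat(mesh), []);
}
```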
So far in our game we have the attributes of the level set and a few test platforms to inhabit it; however, we're missing our main character! Introducing the prototype character that will be controlled by the user: "Capsule". Capsule isn't just another default gameobject in Unity; through a little bit of tinkering I've managed to give it the basic movement controls it needs to explore the scene. This is done through the Platform Controller script I attached to it. Specifically, it works alongside Unity's inbuilt CharacterController component and defines the physical mechanics within the environment. Additionally, through the Inspector pane in Unity I can adjust the various properties the Platform Controller script exposes; for example, I can change the walk or run speed of the character, the jump height, etc.
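As a small illustration of how those Inspector values feed into movement: a "jump height" setting is commonly converted into an initial upward velocity using v = sqrt(2·g·h), and walk/run speed simply selects the horizontal velocity. This is a generic sketch of that idea, not the Platform Controller script itself:

```typescript
// Hypothetical shape for the tunable settings exposed in the Inspector.
interface MovementSettings { walkSpeed: number; runSpeed: number; jumpHeight: number; }

// Pick horizontal speed depending on whether the run key is held.
function horizontalSpeed(s: MovementSettings, running: boolean): number {
  return running ? s.runSpeed : s.walkSpeed;
}

// Convert a desired jump height into a launch velocity: v = sqrt(2 * g * h).
function jumpVelocity(jumpHeight: number, gravity: number = 9.81): number {
  return Math.sqrt(2 * gravity * jumpHeight);
}
```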
Even though we now have a simple character moving around an environment we need to introduce the Main Camera otherwise they will just walk offscreen! We need the Main Camera to do the following:
We need the camera to follow the character but also to introduce a subtle shift when the character moves further in one direction. The camera in this game works much like the style of games such as Defender; this allows the player to easily view what is ahead in the level before they reach it.
We also need the camera to track the main character when it jumps, by adding a "springiness" property to the camera script. To make things less jumpy for the player, it is preferable for the camera not to follow the complete motion when the character jumps. By making the camera lag with the springiness property, we can stop it following the complete jumping motion.
In order to achieve these objectives we can add the following scripts to the camera object.
Camera Scrolling: allows the camera to move how we want, with a distance property and a springiness property. The distance property (z-direction) sets how far the camera sits from the target object (our main character).
Springiness property: stops the camera from following the complete jumping motion. The value itself defines how responsive the camera is to the target's motion.
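The "springiness" behaviour boils down to the camera moving only a fraction of the way towards the target each frame instead of snapping onto it. The exact formula in the camera script may differ; this is the standard exponential-smoothing form of the idea:

```typescript
// Sketch of spring-like camera lag: higher springiness means the camera
// responds faster to the target; lower springiness means more lag on jumps.
function springFollow(cameraY: number, targetY: number,
                      springiness: number, deltaTime: number): number {
  const t = Math.min(1, springiness * deltaTime); // fraction to move this frame
  return cameraY + (targetY - cameraY) * t;
}
```

Called every frame, this makes the camera trail behind a jumping character and settle smoothly, which is exactly the "not following the complete jumping motion" effect described above.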
As well as the basic camera features, we also have a few camera settings attached to our character; this script is called "CameraTargetAttributes".
Height offset: if set to 0, the target will sit at the vertical centre of the screen. If set to a positive number, the camera will shift upwards, creating a vertically off-centre target.
Distance modifier: we can modify the distance of the camera from the target; this setting adds onto the value defined in the CameraScrolling script. (This functionality is good for multiple targets.)
Velocity look ahead: defines how quickly the camera shifts to look ahead when the target is moving.
Max look ahead: so that the character doesn't leave the screen, the value set here is the distance at which the camera will stop looking ahead. X is the horizontal distance whilst Y is the vertical distance.
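Putting the attributes above together, the camera's desired position could be computed roughly like this (the names and maths are my reading of the descriptions, not the CameraTargetAttributes script itself; only horizontal look-ahead is sketched):

```typescript
// Sketch: combine the target attributes into a desired camera position.
function desiredCameraPos(
  target: { x: number; y: number; vx: number }, // position + horizontal velocity
  heightOffset: number,       // vertical offset so the target sits off-centre
  distance: number,           // base distance from the CameraScrolling script
  distanceModifier: number,   // added onto that base distance
  lookAheadSpeed: number,     // how strongly we look ahead of the motion
  maxLookAheadX: number       // clamp so the character never leaves the screen
): { x: number; y: number; z: number } {
  const lookAhead = Math.max(-maxLookAheadX,
    Math.min(maxLookAheadX, target.vx * lookAheadSpeed));
  return {
    x: target.x + lookAhead,            // shifted ahead of the moving character
    y: target.y + heightOffset,         // vertically off-centre framing
    z: -(distance + distanceModifier),  // pulled back along the z-direction
  };
}
```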
With the creation of my title pages done, I am now going to start the game development process for my Final Major Project. My first goal with this level is to create an environment that I can walk around, with a few platforms to jump between. When creating a game in Unity that is predominantly 2D, we need to understand the common principles that must stay consistent for the project to function. To start off we need to define our plane of motion for all of the gameobjects. To prevent this game from being three-dimensional, one of the dimensions must be restricted from motion. For my game, I'm going to choose the X-axis to correspond to horizontal movement and the Y-axis to correspond to vertical movement. The Z-axis will therefore correspond to movement towards and away from the observer (or camera), and no character movement will happen along it. To easily memorise the planes of motion: when I click on a gameobject within the scene view, the red, green and blue arrows represent the directions of X, Y and Z respectively.
The second principle I'm going to stick to is the restriction of rotation. The only rotation present within the game will be around the Z-axis. This is the axis the camera looks along, so rotation around it appears as clockwise and anti-clockwise rotation on screen. The only exception to this is rotation around the Y-axis, so the character can turn from side to side in mid-air.
To help enforce these principles I'm going to use an object called "Level Attributes" with a script of the same name attached to it. This script handles a whole variety of environment-setting features, which do the following:
Displays the level's dimensions within the scene-view by drawing a green bordered rectangle.
Creates a physical collider that acts as a border, stopping objects falling infinitely and users going beyond where they should.
Allows the user to adjust the position where the physical colliders start and how wide/high they extend from that position.
Gives the character room to fall without the camera following it until it hits the bottom collider.
So now we have a script that creates an area for the character to move along the X and Y axes; however, if the character were to fall to the boundary of this area nothing would happen. This is where we implement the Death Zone object, which has the "DeathTrigger" script attached to it. This script provides a collider which causes the character to respawn if they fall onto it; this is a great feature for small pits or the whole level itself.
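The death-and-respawn flow, whether triggered by the DeathTrigger collider or by running out of health, can be sketched like this (the names here are hypothetical; the real scripts communicate by sending a Unity message to the player controller):

```typescript
// Sketch of the respawn flow described above.
interface PlayerState { x: number; y: number; health: number; }

// Respawning puts the player back at the spawn point with full health (assumed 100).
function respawn(spawnPoint: { x: number; y: number }): PlayerState {
  return { x: spawnPoint.x, y: spawnPoint.y, health: 100 };
}

// Touching the death-zone collider counts as a death regardless of remaining health.
function onDeathTrigger(player: PlayerState,
                        spawnPoint: { x: number; y: number }): PlayerState {
  return respawn(spawnPoint);
}
```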
From my initial crosshair development I decided that I wanted to create a crosshair that was simple in its design yet unique. Therefore, when I was drawing up my various ideas I tried to keep my sketches geometric and easily interpreted by others. In the final stages of my drawing I highlighted a few designs that I preferred and was going to develop further in Photoshop. When it got to working in Photoshop I did two designs that were heavily focused around the simple shape of a circle. The image to the right displays the 2 final outcomes I ended up with: one with 4 crescent moons focused around a circle, and another with 4 banana-like shapes. I think for my final game I'm going to choose the crescent design, as the 4 shapes resemble more of a hidden circle than the banana design does. Additionally, I find that the crescent design is more coherent with conventional crosshairs found in all types of video games. For example, the image to the right displays the UI from Team Fortress 2; the designers there decided to go for a simpler approach to their crosshair, with a circle that has 4 spaced-out segments removed, resembling an invisible X-shape through the circle.
Now, in order to implement the design within Unity, I needed a script that would hide the default Windows mouse cursor and replace it with my design within the game. Additionally, I needed the new design to act the same as the default cursor by following it around wherever the user moved their mouse. Whilst trawling around on the interwebs I found a script that could be attached to the main camera (so it used the 2D GUI plane) and used a variable to display any design I wanted for my crosshair. The key issue, though, was that it was optimised for a set screen size, and I wanted it to be flexible around any size the game may find itself at. Therefore, I changed the Mathf.Clamp for the position of the mouse to start at the top left of the gamespace and span the width/height of the current screen size. The image above shows the code that I used for the crosshair within Unity.
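The change described above amounts to clamping the crosshair position against the current screen dimensions rather than hard-coded ones. A stand-in sketch of that clamp (the real script uses Unity's Mathf.Clamp with Screen width/height; the names here are placeholders):

```typescript
// Sketch: keep the crosshair inside the current screen bounds, whatever
// resolution the game is running at.
function clampCrosshair(mouseX: number, mouseY: number,
                        screenW: number, screenH: number): { x: number; y: number } {
  return {
    x: Math.max(0, Math.min(screenW, mouseX)), // 0 = left edge of the gamespace
    y: Math.max(0, Math.min(screenH, mouseY)), // 0 = top edge on the GUI plane
  };
}
```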
To help me create the design of the information page I'm going to use the layout I used for my 3D Sound Toy. Of course the information and content will be different, but the explanation of the interface elements and controls will remain. Firstly, though, I need to develop the interface elements that will be shown in-game and on the info page, and also structure out exactly what is going to be shown on the page itself. The spaced-out text below shows the skeleton of what I'm going to be displaying to the user on the info page.
Cure Logo
Input Controls
[IMG] Use W-A-S-D to move the Main Character around the Environment
[IMG] Click the Left Mouse Button to fire an arrow in the direction of the crosshair
[IMG] Use the Space-Bar to jump around the environment
In-Game Icons
[IMG] This displays how much health the Main character has left
[IMG] This is the Crosshair for aiming towards your target.
After much spacing out and resizing I have finished what I needed to explain on my information page. As you can see by looking at the image to the right, I have taken out the initially proposed "In-Game Icons" section. I did this because I felt that it's general video game etiquette to understand what a crosshair/health bar is. If somebody came across those icons and didn't understand them, with some trial and error they quickly would.
The other (minimised) image to the right is the code that I used for laying out the info page; the key differences from the previous info page I did for the 3D Sound Toy are the paragraph text I had to implement, the logo and the new exit button.
With my previous experience with Unity from the Interactive Media Authoring Unit I've started to develop the Menu system for my game. If my evidence seems lacking explanation and detail, please refer to the development blog post with Unity here.
Because I have finished developing my logo on paper, I've decided to now use Photoshop to refine the final outcome of the title and buttons for the menu screen. The process involved taking the assets I drew and going over them with the Pen tool.
The image to the right displays the elixir I drew up in my sketchbook and am currently developing in Photoshop. To help me develop the background logo icon I also looked towards the Health Elixir from the MMORPG Tera.
With thorough usage of the Pen Tool and Blending Options (e.g. Gradient Overlay), I've managed to create an elixir that will look perfect for the background of my logo.
By using a simple stroke I've managed to outline the font I have chosen for the type. For the smoke I used the first 4 steps of this tutorial. To sum up, I took a shape, blurred it and then liquified it.
Instead of using the GUI font styles Unity offered, I decided to use images for my buttons as they are going to be subject to change. For the menu layout I used a similar design to my 3D Sound Toy; however, as I was going to be including a logo and new images, I needed to space out my elements differently. In this case I spaced my elements around a Photoshop region, which allowed me to have buttons that were not confined to a box like in my 3D Sound Toy. Additionally, I added a hover effect that this time around did not change the colour, but instead gave the image 2 stars either side and offset the image down by 10 pixels. By looking at the video to the top right you can see how I designed my menu differently from my 3D Sound Toy. This time around I found it quite hard to space out and get the sizing right, as it wasn't dependent on the button box I had beforehand but on the flexibility of the font style.
After a simple implementation of the logo with the aid of this Unity help webpage, I have more or less completed a simple menu for my 2.5D game! To the right is an image of how the final menu looks, and below that is the code that puts it together.
Over the past few weeks I have been sketching out various ideas for the specific interface elements of my game. Using the mind-map I previously drew out as a guide, I have created a whole plethora of interface elements that I am going to select from and develop further within Photoshop.
(08/05/2013) During my research I found a very useful article that went into the different design stages of creating an "Organism Interface". This basically means an interface that is strong enough to take the user between the plane of reality and the game's narrative. The first stage of this section talks about the creation process of the character and how they deal with enemies/obstacles that enter the screen. From the various focus sessions I have had with my other 2 team members, we have decided that the character will use their bow and arrow to attack and progress through the starting level we'll be creating. More information on the creation of the character can be found in the Game Design Document Mark Collins has put together for the project; in the meantime, I will be focusing on the various interface elements that will be needed.
Below is a Mind-map detailing all the proposed and possible interface functions for the game itself.
(25/04/2013) The purpose of this mood board is to capture the right themes and patterns that I will base my interface elements on. The artistic direction for the game is going to be a mixture of the decorative (art nouveau) and simple shapes (geometric), which heavily relates to fantasy/Middle-Earth elves.
(09/05/2013) Due to the overall timeframe of the project, the team and I will be veering towards a more simplistic style over the original art nouveau choice.
(03/04/2013) As I've been bogged down with research over the past week or so I've decided that today I'll have a go at practicing creating interface elements with the Pen tool in Photoshop CS6. The last time I recall using such a tool was for Logo Design over a year ago, hopefully with a few tutorials I'll be able to re-grasp what I was taught then.
To start off, I'm going to grab a stock image of a hammer and trace it with the Pen tool in Photoshop. Such an exercise will get me used to creating anchor points and editing them to achieve the right curvatures. Additionally, through this type of simple practice I'll get a sense of where anchor points should initially go around an image and how I can split paths for various shades of colour. (24/04/2013) Due to personal reasons I have been unable to do work over the past week or so; the progress on the hammer is just the handle so far. Below is the image of my progress, and if I find some time I will try to finish it. This tutorial has been very helpful in researching how to use tools such as the Pen tool and how to manipulate anchor points. Tools such as the Pen tool will be key to designing the interface elements needed for the final game.
These are the resources that have helped me in creating this hammer:
https://www.youtube.com/watch?v=_bJSWni7Huw
http://blog.rockymountaintraining.com/?p=2703
http://www.photoshoplady.com/tutorial/turning-a-image-into-a-beautiful-paint/602
(23/03/2013) As mentioned in my previous research notes into the Principles of Interaction Design, interface design is governed by factors such as visibility, prediction, feedback, learnability and consistency. Even though these elements are key to designing interactive experiences, these principles can differ depending on what interactive product you're designing. In the case of a computer game, the experience the user has with the interface can change drastically depending on a number of factors.
When we talk about user interfaces for computer games the term GUI is used; here, GUI stands for Game User Interface. Basically, GUI refers to the medium through which the user communicates with the device (e.g. keyboard, mouse, joystick, etc.) and the interface with which the user interacts (e.g. maps, inventory, options, etc.). It is these types of elements that many game developers consider the glue that binds the user's input to the actions that happen on-screen; without it the user cannot interact with the game, nor can they gain feedback from it.
Depending on what information needs to be displayed at a given moment, GUIs can take a number of different forms. Generally these forms either display a flow of constant information whilst in-game (e.g. health bars and minimaps), commonly known as the HUD, or provide the user information that is better suited outside of the game environment (e.g. menus and options). However, any type of GUI has the common purpose of sending the user the relevant information clearly and quickly, whilst being easily disposed of once understood.
Because computer games have to consider the element of fiction, the principles of user interface design differ in comparison to more general types of UI design. The reason for this is that there is an actual character in the setting who is an invisible yet key component of the story, much like a narrator is to a book or film. Therefore, the real unique quality of UI design for computer games is that there is a varying level to which the fiction can be linked to the UI itself. Through the narrative and the environment of the game, the UI can either be directly linked to the fiction, partially linked, or not linked at all. To better explain this, game developers describe the different types of HUD UI as being Diegetic, Meta, Spatial or Non-Diegetic.
(24/03/2013) Diegetic user interface elements are elements that exist within the geometry and fiction of the game itself. So, instead of game developers choosing to provide information by using a 2D overlay, they're choosing to display information that both the character within the story and the user playing the game can interpret. Take the example of Far Cry 2: there are a number of gadgets that the user can pull up at any given time which take on the roles of a typical HUD interface. In the case of displaying the time to the user, instead of using a constant overlay or switching to a static menu, the character can just bring up his watch and read the time from there. Using a diegetic interface is an interesting method of enhancing the narrative experience of the character whilst also providing the user an experience that doesn't stray away from realism. In terms of storytelling, having the character interact with the environment whilst also giving the user the necessary information is a great method of creating immersive gameplay; however, it doesn't come without its drawbacks. There are many cases where game developers have tried to use a diegetic interface but have actually negatively affected the experience of the user through response times. Games such as Metro 2033 or Fallout 3 use animations to transition to the character interacting with the device that gives the user the necessary information; however, if there is a waiting time to get to that information, this can frustrate the user over the course of a long game. The image to the right is from the 2008 RPG Fallout 3; it displays a few snapshots of an animation of the character bringing the Pip-Boy (Personal Information Processor) up to eye view.
Even though this animation only lasts for half a second, the user is continuously using this device to retrieve information, and therefore has to sit through this animation countless times over the course of an 80-hour game. Instead of immersing the character within the game, using a long response time in favour of functionality can actually alienate the user from the experience of the game itself, as they're constantly waiting to make progress through the process of slow-moving geometry. (25/03/2013) Diegetic user interfaces can also be implemented within games that use a fictitious time period as their narrative setting. For example, FPS games set in the future can use diegetic patterns that represent HUD features (e.g. health and ammo) through narrative objects such as helmets. Take the example of the DART overlay in the most recent Syndicate game: within Syndicate there are a number of UI elements that all link to the futuristic and technological narrative of the game itself. Features such as the highlighting of enemies and the showing of ammo serve as information that can help the user whilst also fitting within the storyline, as the technology to show them is available to the characters. Other examples of diegetic solutions within a futuristic setting are the holograms used to represent text in the 2008 game Dead Space. So, instead of breaking the fiction by using a 2D overlay or cutting to a paused menu, Dead Space uses a UI that is explained through the medium of holograms, which both the user can interpret and the character in-game can see.
Even though diegetic UI elements are a powerful tool for immersing the user within the storyline of the game itself, there are times when it can seem inappropriate to implement them. For example, in some cases diegetic UI elements are illegible within the geometry of the game world; this can be due to the fiction of the storyline not backing up the functionality enough, or the element obstructing information, especially within a 3D setting. Another reason why it can be inappropriate is the need to break the fiction in order to provide the user information that the character may already know, but the user may not.
In the cases where diegetic UI elements cannot fit within the geometry of the game world, developers can still maintain the game's narrative by using elements that sit on the 2D HUD plane; these are known as Meta elements. One of the most common uses of Meta elements occurs in popular mainstream FPS games such as Call of Duty. Instead of the commonly used number-based health-bar functionality, the developers at Infinity Ward decided to use the 2D plane to display how hurt the user is through blood spatters/veins on the screen. By looking at the video to the right we can see the progression of how the character becomes more and more vulnerable whilst being attacked. If he were to carry on being shot, the 2D blood plane would increase and eventually turn into a grey overlay with a tilted camera shot, indicating the character has been killed.
(27/03/2013) Even though it can seem that Meta elements are best suited to first-person shooters, other game genres have adapted that type of user interface too. By looking at games such as Grand Theft Auto IV, which has a third-person view, we can see Meta elements through in-game functions such as the mobile phone. When Niko Bellic, our protagonist, receives a phone call from an NPC, the user is met with a 2D image of the mobile ringing, with the option to answer it or decline the call. At first, the interaction of this UI element can be considered diegetic; however, because the element appears on the 2D HUD plane it is Meta. Because Meta is dependent on being linked to the narrative, it can be quite difficult to define in non-FPS settings such as racing or sport games. By looking at the image to the right we can see at least 6 UI elements situated on the 2D HUD plane, yet only one of these can be strongly linked to the narrative of the game: the speedometer in the bottom-right corner. Because the speedometer is a feature that you would generally find within a racing car (both functionally and aesthetically in this case), it is strongly considered Meta, as both the character within the vehicle and the player would interpret this information exactly as portrayed. However, the other 5 UI elements within the HUD may not be considered as closely linked to the narrative, as it can be quite hard to tell whether or not the character would have that exact information as shown in the image.
Diegetic and Meta UI elements are the glue needed to create immersive gameplay that is heavily intertwined with the narrative; however, not all genres of computer games need or even require the UI to be connected to the storyline. 3D visual aids are a great example of UI that provides information to the user within the geometry of the game world yet has nothing to do with the storyline at hand; this is known as a Spatial UI element. (28/03/2013) Because Spatial UI elements break away from the narrative, they should only be used to provide the user information that the character may already be aware of. Take the example of a character trying to get from point A to point B: they may know exactly what path to take, but because the user is unfamiliar with the terrain, they don't. This is the cue for a UI element to be implemented within the geometry of the game world showing exactly which direction the user must take in order to achieve their goal or progress within the game. Even though Spatial UI elements are a great method of keeping the user immersed, as opposed to screen menus, they still break the fiction of the game. This is why, when implementing this type of UI, developers should keep to the fiction of the game as much as possible in order not to break the user's immersion.
(29/03/2013) Fable 3 is a great example of a third-person game which uses Spatial UI elements to show the user the direction to go. Instead of breaking the immersion by using a menu screen, the developers decided to keep to the fiction by tapping into the magical aesthetic quality of the game with a "golden trail". By looking at the image to the right, the character in-game is met with the trail leading them to the treasure. However, the purpose of this golden trail is not to directly support the storyline of the game but to lead the user to the objective if they're unsure of which path to take. Therefore, the existence of this trail is not registered by the character in-game but by the user, in order to provide more information. This type of 3D visual aid was similarly used in the recent instalment in the Bioshock series, Bioshock Infinite. If the user found themselves lost within the game world, pressing N on the keyboard would briefly pop up a 3D arrow on the screen to lead them to their objective. Even though the aesthetic quality doesn't match the fiction of the game, a briefly shown arrow not only emphasises functionality (the pointer of the arrowhead) but also doesn't break the immersion too much, as it's not constantly there. The issue with using a constant visual aid such as the one in Fable 3 is that it can feel particularly uncomfortable to have something physically and constantly within the game world whilst the character doesn't acknowledge it in the slightest.
Games which don't have a main focus on a storyline may benefit more from Spatial UI elements, as there is a larger aesthetic choice with which they can stylise the information for the user. Because the developers don't have any physical characters within a setting, they don't have to cater to the level of fiction and can solely focus on the appearance and functionality of their UI elements. For example, the styling used in racing games such as Forza 4 demonstrates how Spatial elements can be beautiful pieces placed solely within the geometry of the game. By looking at the image above we can see a range of simple interface icons and clean font types that match perfectly with the rich 3D qualities of the game. Using such a clean and simple style not only serves its purpose of relaying information to the user, but still allows enough room to show off the environment without being too cluttered.
(01/04/2013) Last but not least, if the UI has no place within the game world and no actual correlation to the storyline itself, then this type of user interface is known as Non-Diegetic. When we talk about Non-Diegetic elements we can specifically refer to the majority of interface elements used in previous generations of computer games. Health bars, minimaps, ammo stats and weapon selections are all prime examples of Non-Diegetic elements used over the past generations. The main aspect to remember about this type of user interface is that it's not rendered within the game world itself and is only visible/audible to the user playing the actual game.
What I believe to be the key purpose of Non-Diegetic elements is that they serve solely the interactive experience of the user playing the game. Aspects such as functionality and UX (user experience) are, I find, a lot more important to focus on initially before trying to intertwine user elements with the fiction. Take the example of the 2007 MMORPG RuneScape: even though its interface has been stylised to the narrative of the game, the majority of user elements are still Non-Diegetic. By looking at the image to the right we can see a user interface that has been completely rendered on the 2D plane and outside of the game world. The only elements that I can see rendered within the game world are health bars and hitpoints that have been spatially placed over the characters in the background. Within this image there are no elements that are Meta or Diegetic; this could be due to the fact that the game is solely based on experiences between online players, with a story that is mainly expressed through NPCs and on-screen text. If this type of game were a single-player experience then I could understand the need to implement more story-like user elements (e.g. instead of the backpack being displayed in a grid, it could be shown within a bag). However, because the game is based on experiences with real-life friends/enemies, I'm guessing the developers needed to focus more on functionality in order for the user to react quicker to other online players' actions. Take the example of another user trying to kill you: it would be a poor experience if you had to fiddle around with opening your backpack to get items to fight back with, rather than actually fighting back in the first place.
The fact that Non-Diegetic elements may be best suited to online gameplay doesn't mean that they're always implemented within it. Games such as Anno 2070 adopt Non-Diegetic elements yet can still be played both single player and multiplayer. (03/04/2013) Anno 2070 pairs a rich 3D experience with a minimalist user interface that has both bold icons and simple type. The reason for the interface being designed in such a way is that games such as Anno 2070 are heavily based on functions and dynamic, game-changing actions. This genre is typically known as God-games or RTS games; usually their interfaces are non-diegetic because the user is effectively treated as the protagonist. They control much of what happens within the game, so a non-diegetic interface makes sense given the number of actions the user will be performing that can change the gameworld drastically.
Earlier it was mentioned that many games of past generations adopted Non-Diegetic interface elements; this was probably because developing games involved a lot of mathematical equations and room for fiction was quite limited. The image to the right is a screenshot from the popular 1992 game Wolfenstein 3D. The interface it uses is almost completely non-diegetic, as most of the information is displayed to the user only as quantitative data. For example, instead of the screen turning red when the user was shot, the health percentage would simply decrease as the user received damage. This is how a lot of old game interfaces looked before the technology was available to create more immersive gameplay within the geometry of the gameworld.
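That quantitative approach is simple to sketch: health is just a number that damage subtracts from, and the HUD prints that number rather than changing anything inside the gameworld. Below is a minimal illustration in plain JavaScript; the variable and function names are my own, not taken from Wolfenstein 3D or any other game mentioned here.

```javascript
// Hypothetical sketch of a purely quantitative, non-diegetic health readout.
var health = 100;

// Subtract damage, clamping so health never drops below zero.
function applyDamage(amount) {
    health = Math.max(0, health - amount);
    return health;
}

// The HUD simply prints the number instead of, say, tinting the screen red.
function hudText() {
    return "Health: " + health + "%";
}

applyDamage(30);
console.log(hudText()); // Health: 70%
```

The point of the sketch is that nothing here touches the gameworld's geometry; the state lives in one variable and the interface is a string drawn over the top.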
(27/04/2013) Overall we have seen how far Game Designers consider the element of fiction within their GUIs: whether the UI is included within the geometry of the gameworld or lies on the 2D HUD plane, and whether it can be seen only by the user playing the game or by the character too. Even though some may consider that involving more Diegetic elements strengthens the connection between the user and player, others still look towards the necessity of Non-Diegetic elements, as their role is to serve functionality above all. This is where we look towards the principles of developing a GUI.
When designing the UI for a video game there is always a pull between what would be easier to use and what would be better for the story. If we want to create either a mostly Diegetic or Non-Diegetic user interface we must understand what each one does for the game itself. Effectively, a Diegetic user interface brings the user closer to the game by pulling them further away from reality, whilst a Non-Diegetic user interface does the opposite by grounding them and reinforcing what we know as the "fourth wall". So, why would we not want the user to be immersed within our game? Well, if we were to remove non-diegetic elements from game-design "law" then it would make it easier for the developer to make the user lose themselves in the game via immersion. However, making an element of the gameplay experience more diegetic often comes at the cost of usability, and can thus have the negative effect of pulling the user further from the experience. Take the example of the Fallout 3 animation I mentioned earlier: pulling up your arm with the PIP-Boy within the game is actually more difficult than it would be in real life. The specific problem in this case is that a Diegetic transition has been chosen over functionality, making something mundane and repetitive more time-consuming. On the flip side of the coin, though, it may not be as entertaining or memorable for the user to be taken to an inventory screen outside of the game-world, especially as a device such as the PIP-Boy has such high popularity within the gaming community. On the whole, Diegesis is something that all developers should consider in the course of making a game: whether a single element would be better suited to the narrative or kept separate from the game world itself. By enforcing this balance between immersion and usability, game designers can ensure that their game will be both entertaining storywise and easily usable when released.
The balance of Diegesis can also intertwine with one of the first basic principles of display design: never prioritise the appearance of an interface over efficient information delivery. How the screen looks in terms of colours, icons and layout shouldn't be what a UI designer focuses on when they first develop an interface. By figuring out what decision the user will have to make at each point within a certain task, the designer can focus on what information needs to be shown on screen when that decision is made. So instead of creating a pretty layout for your game first-hand, a designer can make a basic prototype with simple shapes and patterns to really get down the foundation and structure needed for the interface before any stylisation. It is this basic principle which can save a designer from making further errors. For example, a common design error is to have a new window/dialog open for every aspect of a task; this can lead to a very cluttered display, with the possible consequence of creating cluttered procedures. A better approach to presenting information is to keep it always visible on screen, as eye movements are much faster than waiting for new windows to be brought up. By having a well-planned layout of all possible decisions before worrying about appearance, it is actually possible to present far more information and controls than initially realised.
(02/04/2013) Information and controls also depend on the form in which they're delivered on screen: generally this is either icons or plain words. The current trend with Graphical User Interfaces is to use an abundance of nifty icons instead of words, because icons are generally more desirable to click; they have a dense surface area and are purposefully compact. Words, by comparison, can be quite long and thin, which often makes for a small and harder-to-hit target, yet they can still be more useful than icons. Often enough, icons can be quite arbitrary and meaningless if their concept is abstract and doesn't fit in with the general etiquette of interactive media products. A recent example of this is the 2013 SimCity game: in the image above we can see a whole plethora of icons used to represent controls such as building categories and data maps. The issue here (especially with the data maps) is that the user would have to play the game for a long time, with enough trial and error, to understand the icons. Why should we make the user memorise a meaningless symbol that may not even be used within any other interactive product? This is why icons tend to work best when they closely resemble familiar objects used every day, either physically or within other interactive products; for example, a brush icon within an image manipulation program can easily be associated with a paint brush.
The principles of developing a GUI, especially for games, will change depending on the genre of the game. Below are a set of annotated images displaying the general interface elements used within various genres.
Quite a few of the aspects that make an interface bad can be found in my research into the principles of Interaction Design, where I go into more detail about inconsistent interfaces that can be unresponsive and contain too much or too little information. Additionally, I also mention the usage of typography, colour and styles that make an interface look bad; click here for the link to the blog.
Common Interface Layouts For Video Games (Not all Interface Elements listed)
First Person Shooter
- The focus point is centered, your gun will always shoot towards it.
- Health generally on the left and Ammo on the right, other related elements are usually on the edge of the screen
- Semi-Transparent, Minimal interface used in order not to clutter gameplay.
- Other inventory overlays generally show up when activated (e.g. scrolling shows the weapon choices to the side of the screen).
- This is the general layout of FPS games; people who play this genre expect this typical interface structure.
3rd Person
- Requires precise targeting due to player movement dependency.
- Health top-left, actions top-right, other information bottom-left and Mini-map bottom right. (Common layout for 3rd Person, e.g. Zelda)
God-games/Simulation/MMORPG
- Dealing with various different statistics and objects
- The majority of the interface focuses on management, which is easily distinguished from the action of the game.
- These types of games tend to have the biggest and most complex interfaces.
Intro
Working Title: Cure
Within the next 13 weeks I am aiming to help develop a 2.5D game by programming the game itself and designing the user interface. In order to do this I'll be using frameworks such as Unity 3D to compile the game itself and Photoshop to design the various interface elements that will be used in the final game. Throughout the length of the project I'll be grouped with two other people who will be focusing on the 3D design of the environment/characters. The title of this project is "Cure"; the story itself follows the adventures of our main Elfish protagonist, "Rinako", who is trying to find the remedy that'll heal the illness that has plagued her species.
Influences, Starting Points and Contextual References
Throughout my gaming history I have loved to play games that are unique both in their aesthetic style and in the gameplay they portray to the user. Quite often I find myself glued to games that give endless possibilities to the player, either through the various obstacles they encounter (online gameplay with FPS games) or the different themes they touch upon (non-linear RTS/sandbox games). It is with this influence of unique, immersive gameplay that I want to help create a game that is both achievable in the time frame given yet challenging at the same time. The type of genre I have chosen with my group for this game is 2.5D: basically, the gameplay itself will be in three-dimensional form yet restricted to the two-dimensional plane (X and Y). Occasionally the main protagonist and camera will deviate along the z-axis when going through certain obstacles, but generally the gameplay will be fixed along that co-ordinate. This type of genre and gameplay is heavily influenced by games I've played such as Trine 2. Trine 2 allows the players to take control of three main protagonists who all have various different functions that can aid the player in getting across obstacles. In the effort not to re-create a beloved game, I'm going to attempt a unique experience, not with the type of characters the player can toggle through, but with the different methods and directions the user can take the main character. For example, getting over a gap can be met with swinging over it, using an object, jumping over it, or any other technique that may be to a player's advantage using the environment around them. One of the starting points I can take off with Unity is the 2D tutorial that they offer on their website.
The artistic style I'll be taking along with the group in this game is a pretty simplistic one made up of basic shapes and pastel-like colors. Similar to the art style of indie games such as Pid, I'm hoping to create a user interface for the game that isn't too bold or intrusive to the gameplay itself. In order for me to design the HUD I'll be trying out various shapes and styles in my sketchbook, then eventually transferring them for further editing in Photoshop. From there, I will be able to communicate with the rest of my team on whether or not they approve of the certain assets I've made, and hopefully they can be implemented. Once implemented, I'll be programming the GUI in Unity so that it functions in the way I need it to. Indie games such as Pid and Trine 2 have heavily influenced the style of interface that I'll create; the clean yet majestic style is what has really inspired me to focus on Interface Design for the Final Major Project. In terms of previous experience, I have had the opportunity of creating a 3D game with Unity in the games-engine unit I did late last year. Even though the time-frame was smaller, the group I was in managed to create a game level with simple obstacles and teleportation. Although we're not improving the previous game, I'm hoping that in this new project I'll be able to create a level that is both perfectly functioning and aesthetically pleasing.
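As a rough illustration of the 2.5D restriction described above, movement can be computed with full 3D positions while the z co-ordinate is snapped back to the gameplay plane each frame, with certain obstacles allowed to override it. This is a hypothetical sketch in plain JavaScript (in a Unity script the same idea would run inside an Update function against transform.position); the names here are my own, not from the actual project.

```javascript
// Hypothetical sketch of the 2.5D constraint: positions are 3D, but gameplay
// is locked to a fixed z plane unless an obstacle supplies an override.
var GAMEPLAY_Z = 0;

function lockToPlane(position, overrideZ) {
    return {
        x: position.x,
        y: position.y,
        // Only certain obstacles are allowed to move the player along z.
        z: (overrideZ === undefined) ? GAMEPLAY_Z : overrideZ
    };
}

// Each frame, any drift along z is snapped back to the gameplay plane.
var after = lockToPlane({ x: 4, y: 2, z: 0.7 });
console.log(after.z); // 0
```

Running this correction every frame is what keeps the character and the platforms in line along the same z-axis, while still leaving room for the occasional obstacle that moves the player in depth.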
Intended Techniques, Media and Processes
For the Final Major Project I'll be using a whole range of real-life tools (such as sketchbooks and pens) and programs that offer certain frameworks to help make, refine and finalize my idea. After conducting thorough research into the artistic theme of the project and the principles of user interface within video games, I'll be using my sketchbook to draw a variety of shapes and patterns to create the icons needed to interact with the environment. For example, aiming can be quite a difficult task for the player, especially if the gameplay is locked to two dimensions; for us as developers to give ease to the experience, interface elements must be introduced. Therefore, in order to create these interface elements I'll be using my sketchbook to conceptualize what they should look like. To refine the elements I'll be using image editing software (Photoshop) to manipulate and change what I've made so that it's easily understandable and ready to be imported into the game. However, the usage of my sketchbook will be minimal once I need to put the game together. To put the game together I'll be using a range of scripting tutorials that support the usage of Unity 3D, which is easily available at home and at my college. Unity 3D aids the user in compiling various JavaScript files in order to create the flesh of a game's functionality. Creating the game from scratch would be hard without the resourceful framework that Unity offers.
Timescale
Pre-Easter (15th – 22nd March)
Design Proposal.
Mapping out level of work for each week.
Easter Holidays (22nd March – 12th April)
Research for Interface Design within Video Games and creating games within Unity 3D
April (12th – 26th April)
Drawing out resources and creating Interface elements for final product.
Researching into Unity and starting a working prototype
Spring (26th April – 10th May)
Compiling the assets together in Unity to create a working game level
Summer (10th May – 6th June)
Final Evaluation and Extra time for refinement
Methods of Evaluation
Usability testing (Messageboard websites, Focus Group with classmates)
Daily diary of progress made and problems handled
Research Blog Posts
Analyzing similar projects
Critiques with Tutor