This basically shows different ways to lay out code so programmers can read it easily and make sense of it. 1TBS keeps the opening brace at the end of the first line of a block, which could represent a literal connection, whereas Allman puts each brace on its own new line… like a quarantine, in a way. It keeps everything separate.
Choosing either style will not affect how the code actually runs, whatever you apply it to. Ultimately this should show what tidy code looks like and an easier way of bracing it. Below is an example of 1TBS bracing; I wouldn't expect you to read it all, but if you do, I highly recommend comparing how the braces are placed against the Allman style.
Back to the game in hand: the major improvement is that collectable objects are now in my game, and collecting them finally triggers the door. The idea:
Get the Collectables.
Avoid the Enemy.
Get to the goal.
Simple. It's just… making more levels and keeping the gameplay flowing at an equilibrium alongside an increasing difficulty throughout the duration of the game…
Working under restrictions can sometimes call for planning, compromise and creativity…
I have been set a task to create a 3D model as a practice project leading up to the WorldSkills event; a treasure chest is my first, and by the time you are reading this I will still be making progress with this model. I just need to start texturing with the UV maps, and hopefully learn more about PBR texturing.
The spikes and the large set of horns give a hint of character to this model, but the horns could also read as vines or tentacles, which can consequently confuse the eyes of a player, modeller or mere inspector.
To create the horns, I had a chance to try a new tool I had been looking into: the NURBS curve tool. It lets me draw out a more accurate line for my polygons to follow as they rise. Additionally, these curves let me be more interesting with the wackiness of my polygons…
Moving on to the texturing in Photoshop… after an hour of unfolding the entire model, including the external assets, I picked up a number of new skills and tools.
After I finished the UV editing, with the new tools and skills I developed along the way, I found that UV unwrapping doesn't take as long as people make out. It mostly comes down to a lot of prior planning, and it is easier when there are external assets: that removes the worry of cutting odd seams, and you can instead have more straightforward UV layouts.
Further venturing through the paths of game design, I have been coding my way through the basics of my top-down stealth game.
I start by creating my assets first: a protagonist or 'main character', walls, an enemy and a backdrop for my game. All I need now is a goal and I can push on with further development. I create all my assets in Adobe Photoshop; by using the brush tool and tinkering with the brush settings, I can paint convincing fake lighting onto the assets, giving a 3-dimensional look without altering the lighting of the scene or the renderer.
Beginning my development in Unity, I made the mistake of setting the project up as a 3D game, so I had to change the camera to 'orthographic' so it renders all the sprite images together without perspective.
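The same switch can be made from code. A minimal sketch, assuming a script attached to any object in the scene (the class name and the size value are my own placeholders):

```csharp
using UnityEngine;

// Hypothetical sketch: force the main camera into orthographic mode
// so layered sprites render flat, without perspective distortion.
public class Setup2DCamera : MonoBehaviour
{
    void Awake()
    {
        Camera cam = Camera.main;
        cam.orthographic = true;
        cam.orthographicSize = 5f; // half the vertical view height, in world units
    }
}
```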
To conclude this blog: I am also considering a feature where the player must collect a specific object to pass each stage, except the first, where the player should just be getting to grips with the controls and the environment around them.
This piece of code is a projectile script, which in simpler terms means 'when shooting, each bullet has two seconds until it is dead'. Sounds odd… bullets don't die, but each one does cease to exist within two seconds of leaving its source, once it has left the screen.
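A minimal sketch of what such a script can look like in Unity (the class name and field are my own, not the exact code from the project):

```csharp
using UnityEngine;

// Hypothetical sketch: each spawned bullet schedules its own removal.
public class Projectile : MonoBehaviour
{
    public float lifetime = 2f; // seconds until the bullet is "dead"

    void Start()
    {
        // Destroy(gameObject, t) removes this object after t seconds,
        // whether or not it is still visible on screen.
        Destroy(gameObject, lifetime);
    }
}
```

Because the field is public, the two-second lifetime can be tweaked per bullet in the Inspector without editing the script.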
I have been progressing ever so slightly with my ident animation, developing and refining it with minor changes using the Graph Editor to help me with 'block animation', which means step-by-step animation, to help me get those main poses, or key frames, just right…
I've also been meaning to redo the lighting, both to expose the scene properly and to enhance its aesthetics. In other words, basically bringing out those nice colours and making everything look spectacular. The method I thought best was three-point lighting: a key light, a fill light and a back light, which together bring out much more of the model or the scene.
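As a sketch of the idea using Maya's Python module `maya.cmds` (this only runs inside Maya's script editor, and the names, intensities and angles are my own placeholders, not my actual scene values):

```python
# Runs only inside Maya: maya.cmds is not a standalone module.
import maya.cmds as cmds

# Three-point setup: the key light does the main exposure, the fill
# softens the key's shadows from the opposite side, and the back light
# separates the subject from the backdrop.
cmds.directionalLight(name='keyLight', intensity=1.2)
cmds.directionalLight(name='fillLight', intensity=0.4)
cmds.directionalLight(name='backLight', intensity=0.8)

# Rough placeholder angles: key from front-left, fill from front-right,
# back light aimed from behind and above the subject.
cmds.xform('keyLight', rotation=(-30, 45, 0))
cmds.xform('fillLight', rotation=(-15, -45, 0))
cmds.xform('backLight', rotation=(-150, 0, 0))
```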
During the development pipeline, I was told about a new renderer called Turtle. Turtle, like Mental Ray, can enhance and improve lighting by taking the characteristics of light from real environments and applying them to your own scene, which overall can inject realism, though sometimes at an uncanny level, which introduces the term 'uncanny valley'.
Uncanny valley – the unsettling effect when something looks almost, but not quite, fully realistic.
After all the lighting was complete, I moved on to cameras and camera animation. For this I did not need block animation; instead, I switched to the camera view I wanted to animate, moved to where my assets looked best and inserted a key frame. This captures my version of what I wanted myself and the audience to see.
The only problem I faced during this task was getting the timing right, but I learnt that by scrubbing through the timeline I can pick out key points in the scene and position the camera where I want, when I need it. Doing this made improving camera animation within Maya a lot easier and a lot more successful.
This blog shows my current work on the robot design, which has also introduced me to more features embedded in Photoshop. I started by looking at robot sketches on Pinterest and taking an interest in design elements I could bring into my own sketches.
I first started with basic measurement sketches, where I can work out how big each portion of the robot will be throughout the drawing. This massively helps me plan ahead and prevents hitting a brick wall during the main outline of the character.
The main outline of the character was drawn around the measurement sketches so I could make sure of what could be added. This also gives me a guide to what I can exaggerate and what to keep realistic, to an extent. As you can see, the shoulders are heavily exaggerated in comparison to the feet, which ties them more closely to the character's primary mass.
Measuring + Outline
I then move on to colouring. I start with the primary colour of the robot, covering the entire drawing. By overlaying everything else on top of this colour, I don't need to worry about making mistakes on the robot's base colour, and I can always step back to where I came from. I put external assets, such as the eyes, on different layers, and the shading on another layer again.
This is something I have wanted to post for a while, as I have also wanted to do this type of small project, and thanks to a few lessons in VFX I have finally done it. To be fair, it was easier than I thought.
Our recent lesson was a catch-up on 3D camera movement and tracking points. Sounds like mumbo jumbo? Let me show you what I mean.
First off, I recorded a short video of my partner looking directly into the lens. Because they kept still, I can track the eye's movement frame by frame, so the tracking points should hopefully stay where they are placed rather than drifting all over the frame.
The golden position for attaching a tracking point is somewhere with a completely clean break between two colours. Place them there and the trackers should stay close to where you assign them.
This is what you should have after running the tracking system…
These small, compact red dots are exactly what you need when tracking movement; they make applying the effect a lot easier on the animator and on the actual 'null' object the motion gets applied to.
Finally, you need to create a solid in your timeline and use the pen tool to cut out the exact shape you want. In this case I wanted to cut the solid to the colour of an eye, but only the coloured inner part, or more scientifically the 'iris' (the sclera is actually the white part).
Yes I did look that up to sound smart…
Anyway, after all that was finished, I had more time to experiment. Looking at the entire scene, it looked too nice and natural to carry such an eerie-looking eye, so I continued by adding adjustment layers, such as a black-and-white filter, and applying contrast settings, which tied my partner's face in more naturally with the eye itself.
This is the result of only 15 minutes' work, which is really good considering I haven't had much experience with tracking.
At the point of writing this blog, I have just finished coding a Unity 2D game, for which we had to learn the basic data types, which are:
Integer data type
These data types are used to control the tools you use and the specifications of the physics engine, the ship, and the like. I also learned that if a variable isn't public, it's out of reach: all we had to do was add the word 'public' at the beginning of its declaration to make that piece of data accessible to whoever has access to the project, so whoever is playing the game can experience the effect.
Doesn't make sense? Let's put it this way.
If I wanted to tinker with the gravity setting on the Rigidbody component, I would have to expose the Rigidbody as a 'public' field. Sounds simple from a programming perspective. We then moved on to adding engine thrust and ship rotation. This is all we had done by the end of the lesson, so as I write this blog, alongside many others, I am also reading through my code and revising the different types of data I can use in other projects, such as my walking simulator and future projects in other software.
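A minimal sketch of the idea, assuming a ship object with a Rigidbody attached (the class name, field names and default values are my own placeholders, not the lesson's exact code):

```csharp
using UnityEngine;

// Hypothetical sketch: public fields show up in Unity's Inspector,
// so thrust and rotation speed can be tuned without touching code.
public class ShipController : MonoBehaviour
{
    public float engineThrust = 10f;   // editable in the Inspector
    public float rotationSpeed = 90f;  // degrees per second
    public Rigidbody body;             // drag the ship's Rigidbody in here

    void FixedUpdate()
    {
        // Thrust while the space bar is held.
        if (Input.GetKey(KeyCode.Space))
        {
            body.AddRelativeForce(Vector3.up * engineThrust);
        }

        // Rotate with the horizontal input axis (A/D or arrow keys).
        float turn = -Input.GetAxis("Horizontal");
        transform.Rotate(Vector3.forward * turn * rotationSpeed * Time.fixedDeltaTime);
    }
}
```

Make any of those fields private instead and it disappears from the Inspector, which is exactly the 'out of reach' behaviour described above.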