Long time, no see (November 2013)
Since my last post, way back when, there have been quite a few developments.
First off, my Ludum Dare results came back.
Sadly, while I was happier with the resulting game, the scores were lower this time than for my last submission:
Coolness 79% (up 11% from #23)
#466 Humor 2.40 (down 0.89 from #23)
#847 Theme 2.83 (down 0.47 from #23)
#890 Fun 2.50 (down 0.03 from #23)
#923 Audio 1.50 (there was no audio in the game, so…)
#951 Mood 2.16 (down 0.52 from #23)
#994 Innovation 2.30 (down 0.51 from #23)
#1016 Graphics 2.10 (down 0.12 from #23)
#1068 Overall 2.43 (down 0.18 from #23)
Last time there were 1,402 entries; this time there were 2,213. So I landed in the top 25% for Humour, the top 40% for Theme and Fun, and the top 50% for the rest of the categories. From that I guess I should be pleased, but there's no getting away from the fact that every one of my scores was below 3 out of 5. Which, in my mind, is a failure.
Ludum Dare #28 is coming up in less than a month's time. With the recent release of Unity's 2D tools, I imagine Unity will cover the majority of submissions. That's understandable; I'd certainly do the same thing. It's a great engine for rapid prototyping, but it does mean there are fewer submissions that test the programming abilities of the people taking part, which is one of the main parts of the competition. The games are less impressive if the majority of the legwork has been taken care of for you. With this game I had to devise my own methods for switching between games, having a menu, etc. In Unity, that's all there already. It could mean more polished games are produced, but it hides the true display of skill, which would be a shame. I, however, won't be participating, as I imagine I'll be too buried under work.
In the time before starting university, I had a lot of spare time, and I decided I'd actually like to carry on working on Wee Paper Planes. So I informed the team that if they wanted more work done, I was able to do it. I got little to no response. I asked for the most up-to-date build to be uploaded so I could work on it, and that took forever. I was annoyed at how little communication there was. Then I started my Masters course (which I'll get to later) and so had no time to spend on WPP. Later I got an email from Iain saying Tiga were accepting submissions for the student category of their award ceremony. I looked at the terms and conditions and sighed: the game needed to have either been released, or be expected to release, by the end of November. There was no way that was happening, but suddenly the team were speaking again. "Let's apply!" they cried. I pointed them at the T&Cs and they started discussing releasing a minimal version. I pointed out, annoyed, that I couldn't work on it, as I'd told them plenty of times, and they suggested getting other coders in. I was really annoyed, but it eventually came out that they weren't going to release it, and so nothing else was said on the matter of submitting the game.
A month later I received news that our game had been nominated for the award. "What the fuck?" I asked the designer who I knew had applied. It was last minute, apparently (the go-to excuse on their part), and they'd decided to go for it anyway. I of course wasn't brought in on the decision, but that's only because it was last minute…
I knew from my mate, who had received a Tiga award, that they offer tickets for the nominees to go down to the ceremony. I waited, foolishly, for such a notion to be brought up to the team. Nothing of the sort was mentioned. I then found out, on the day of the ceremony, that the two designers had gone down to it. They never decided to tell the rest of the team. When they came back up, the reason they never told me was because it was… last minute. Oh fucking aye. I was really annoyed at them; their communication is unnecessarily terrible. Regardless, the outcome was that we won the award. That is certainly great: having a game I worked on win a recognisable award is amazing to have on my CV. However, if we don't release the game then we void the T&Cs and put a mark on our professionalism. I pointed this out and they just didn't seem to care.
Later I found out, from a different member of the team (not the actual lead), that they were planning on working on it to get it to release ASAP. They're actually taking the piss. There is no justifiable reason to keep me out of such a decision. Even if they're planning on dropping me and bringing in other programmers, they should at least have the decency to let me know, instead of going behind my back and stabbing me in it.
Going back to mid-September, I started my Masters of Professional Practice in Games Development. I'd so far been unsuccessful in getting a job, something I, perhaps arrogantly, hadn't anticipated. This was quite stressful, as my budget was tight; I had worked out, on the assumption of having a job, that I would be fine to take part in the course. Being unemployed, however, created a large flaw in that plan. For the next two months I would be living on the bare minimum, receiving at best "thanks but no thanks" emails from the jobs I'd applied for.
Mixed in with this stress, the course itself turned out to be much more difficult than I had thought. I'm not an idiot; I knew it was going to be a tough 52 weeks, and I tried to prepare myself for a gruelling 12 months of work, but it still wasn't enough. In the first week I learnt that we would be using Marmalade for our semester 1 game, and then either UDK or PhyreEngine for the other two semesters. Not news I wished to receive. I had no experience with Marmalade; all I knew of it were the horror stories from third year of the endless issues the coders faced. As if that wasn't bad enough, the threat of my own third-year nightmare, PhyreEngine, was unnerving. It's been two years since, but from talking to MProf students of the previous year, it's not got any better. Starting on Marmalade, I quickly learnt for myself just what a horrible clusterfuck of a framework it is. It tries to provide a platform for creating games for the ridiculously fragmented smartphone market, but it's convoluted and quite involved. To make matters worse, at the time I started, the documentation available was five years out of date, and even that was terribly put together. There was next to no help online bar some equally outdated tutorials. From these bare parts I was able to start putting together test applications; I had to find out what I could do as soon as possible. I knew Marmalade could handle 3D, so I tried that first. However, I hit a brick wall with an error I couldn't solve, so I asked around the other coders in MProf to see if they'd had similar issues. All of them were either using the 2D API or the provided example game engine (which was terrible). I decided to drop 3D; it would only have added to the workload in the long run. Alongside learning the 2D API, I was also learning Box2D, which was euphoric in comparison to Marmalade.
Throughout the next couple of months, I continued to fail to get a job and the workload at uni kept increasing. I was getting unbelievably stressed. I was approaching the point of no return, where I simply would not be able to afford the heavy tuition fees and living expenses. Then Iain emailed me to pass on an application to a company in Dundee that deals with satellites. That's cool, I thought. I read through the requirements and was dismayed: almost all the shit they used, I hadn't touched. They did say, however, that they weren't expecting people to know it all; they just wanted good programmers. Well. "Good programmers" is a subjective term… But even so, I guessed I wouldn't be good enough to work on embedded systems for fucking satellites. Then I thought about how I'd applied to bloody McDonald's, the lowest of the low, and figured: what do I have left to lose? So I applied. One of the items mentioned in the application was Unity, so naturally I told them about my time with Unity and Wee Paper Planes.
It seemed to do the trick: they asked me in for an interview. It so happened that the interview day was the same as the Tiga award ceremony, so I wouldn't have been able to go anyway, but that didn't detract from my annoyance at not being asked. In the interview, I talked. I talked a lot. Perhaps too much, but nerves got the better of me. I found I wanted the job so fucking badly, perhaps to prove I was more capable than I gave myself credit for. I talked about myself and what I'd done, and I explained my misunderstanding of the "Unity" software to which they were referring. In preparation for the interview, I'd looked at all the software they mentioned. Unity had stood out as an odd thing to include amongst everything else, but I figured it was down to it being easy and quick to get something up and running. One of the other items was CMock. Looking into that, I came across "Unity": a unit tester. Fortunately, this misunderstanding struck a humorous tone with the two interviewers. The interview went on for over an hour and I could barely speak afterwards. I'd definitely done as good a job as I felt I could. I left feeling good, but that soon drained as I considered that I wasn't the only interviewee. There are so many impressive coders at Abertay, and I imagine equally so at Dundee Uni, so what chance did I have?
Well, a pretty good chance, it turned out. Not 45 minutes after the interview, I received an email telling me I'd got the job (and then a few hours later I got a Tiga award; a good day in my books). I couldn't believe it; all the stress broke away and I felt fucking amazing. I learnt later that I was right in my assumption that more capable coders had applied for the job; however, one of the interviewers would be working in the Dundee office and wanted someone they could work with. The more capable guy gave the impression he'd go off and do things his own way with no communication. So I guess not being a know-it-all worked in my favour.
I probably can't talk about the specifics of my work at Bright Ascensions, but what I can say is that it is amazing so far. Hard, no doubt; I have to learn a lot in a short amount of time, but the idea that code I write is going to end up in space is an amazing motivator. I still cannot believe the turnaround in my luck. I've gone from struggling to find a minimum-wage job doing some shitty work that would've probably made me suicidal, to a rather well-paid job that will be amazing for my CV (working on embedded systems, rapidly learning new tech, actually having to take on a good coding standard; the list goes on), and to top it all off, I'll have code in space. I mean, fuck. I love space. Space is my absolute favourite thing to think about, after perhaps chocolate and kittens. I have no false hopes about making it into space myself; it's unbelievably unlikely. But getting something I created into space? That's the next best thing, right? I could look up at the sky and know something I made is operating up there. It's a thought my colleague has assured me of, but it's still too hard to believe.
There's no doubt that the amount of stress in my life hasn't decreased; in fact it will more than likely continue increasing to health-declining amounts, though not much worse than the inevitable health decline brought on by my chocolate intake. The difference is the type of stress. I was not coping well over the last few months because I knew I was unstable; I had no control over how my life was going. I'm organised. I'm a planner. I plan everything. I'm uncomfortable with being spontaneous, but I plan so effectively that it can sometimes appear spontaneous. Not being able to control how badly my situation was spiralling was not a good place for me to be in. Having a fuck tonne of work to do? Fine. I actually partially enjoy that sort of stress; it's good to have things to do. Either way, I can control the work: I know what the work is, so I can adapt the plan to what needs to be done.
One final event that I should probably document is the Tiga game hack that I participated in. This was a mere 24 hours, but I was up for it. I decided I would use Unity instead of Gosu, so I wouldn't have to lug my laptop to uni. Unfortunately, the hack fell on the weekend of the alpha deadline for my Masters semester 1 game, as well as when I had a mid-term exam for maths, as well as the week I started my new job. Needless to say, it was quite hectic. It turned out there was no one available that I wished to form a team with, and so I went it alone. I realised, perhaps a little late, that I had no idea how I was going to do the sprite animation. Shit. Fortunately, Unity had added its 2D functionality only a few days earlier. Not so fortunately, this was useless to me working on the uni computers, where the installed Unity was a good three builds old. The theme was childhood. Immediately, my thought was "the ground is lava". Who didn't play that as a kid? However, I quickly decided I'd need to run home, get my laptop and put the most recent Unity on it. I was annoyed, because by this point I'd wasted a good couple of hours. On my way out, it happened that Pixel Blimp, with whom I made Wee Paper Planes, had managed a late entrance. The tickets had sold out quickly, so a lot of people didn't manage to make it in. With them now in, however, I was open to joining their team. Fuck it, I thought, we can make something more impressive as a team. Their idea was "orbital defence", where you simply orbit a planet and fire at oncoming asteroids. That's OK, but where's the theme? They had the shooter as a kid in a cardboard ship, but that wasn't good enough. I provided the back story: the kid is watching TV, on which there's a talk about a meteor shower (there's plenty to choose from), and the kid's vivid imagination interprets this as the Earth actually being in danger, so he gets into his cardboard ship and protects the Earth.
As with WPP, the designer had a very specific control scheme in mind. You click the left side of the screen to rotate counter-clockwise, click the right side to rotate clockwise, and hold both to fire. This meant you couldn't move and shoot at the same time, but the team went with it anyway.
The team comprised said designer, myself, another coder, the audio guy from WPP, an artist, and the other designer from WPP helping out with the art. The latter could only help out for a couple of hours, as he had prior arrangements. With me joining the team, the other coder was freed up to do what he wanted: graphics. He had a shield asset that he'd made and wanted to port over for the mobile platforms. I was fine with that; I wanted to do the gameplay. We all worked pretty much flat out for the rest of the 24 hours. I was surprised at how untired I was feeling, although we were all definitely tired; when there were disagreements, the fuse was rather short. Fortunately there were no major fallings-out. What we ended up with wasn't completely what was originally planned, but taking into account the tremendously short amount of time available (I decided I prefer the ol' 48 hours), I was really pleased with what we created. It is actually really fun, and that's without having the time to properly playtest and tweak. There were of course bugs, but the game was actually really close to a complete product. As such, we're planning on releasing it after doing a bit more work on it (alongside WPP, I guess…). It shouldn't require too much work, and then we can release it. If we manage to get both out, then, combined with the Tiga, my honours, my (hopefully) Masters and the experience I gain from my part-time work, I would hope I'm quite employable.
So to conclude, because I guess this post is long enough to warrant it: I've been incredibly busy and stressed, but after a rather dark and gloomy two months, things are starting to look up. I hope it continues that way.
Summer 2013 recap (Dare Protoplay 2013 & Ludum Dare 27)
So I never carried on the dare game…
Which I'm a little disappointed in myself for. However, I was working on average 10am to 8pm, Monday to Friday, for seven weeks. It was originally just a game to try to work through to completion, so that I'd have a finished game in my portfolio.
I never expected, going into it, to gain as much experience as I did. Not only did I get good hands-on experience of working in a team, but I was able to see (albeit perhaps on a smaller scale) the typical ups and downs of games development. I also never expected to get a chance to show off what we'd made, nor the effect the experience would have on me.
Having a regular schedule during the summer did wonders, not only for my sleeping pattern but also for my health in general. It also made sure I was keeping up with my coding, although we were working in Unity, so it was just scripts that I was writing. However, as gameplay programming is what I enjoy the most, that was fine by me.
Over the course of the first few weeks, there were a lot of back-and-forth discussions over the gameplay. In the end there were five stages of changes before a system emerged that we could agree on. From this control mechanism, we thought of a feature that hadn't previously been considered: in-game cut-scenes. The game we were making was a casual game for the tablet and mobile market. The control scheme we settled on was a method where the player automatically follows a path but can strafe perpendicular to it. This allowed minimal controls but more interesting movement. However, to make it easier and more intuitive for the player, the paths had to be quite straight. Therefore we decided to add cut-scenes that break up the straight paths and allow the players to admire the scenery, as well as give a better sense of flying, as the game features the player as a paper airplane. When beginning the implementation of this system, I realised the camera needed a dynamic zoom feature, not only to keep the player a decent size on screen, but also to make it look more action-orientated. The camera, upon the player starting a cut-scene, flies to a predetermined pivot point. I later added the ability to have multiple cut-scenes, and this could be exploited to allow multiple pivots in a single cut-scene, allowing for even more interesting shots.
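The core of that control scheme — automatic movement along a path, with the player's only input being a perpendicular strafe — can be sketched roughly like this. This is a minimal Python sketch rather than the Unity C# actually used, and all names are illustrative:

```python
import math

def move_along_path(a, b, t, strafe):
    """Position a player who auto-follows the straight segment a -> b
    (progress t in [0, 1]) while offset 'strafe' units perpendicular
    to the direction of travel (the only axis the player controls)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length   # unit direction of travel
    px, py = -uy, ux                    # perpendicular to the path
    base_x, base_y = a[0] + dx * t, a[1] + dy * t
    return (base_x + px * strafe, base_y + py * strafe)
```

Halfway along a horizontal path with a strafe of 2, `move_along_path((0, 0), (10, 0), 0.5, 2)` gives `(5.0, 2.0)`: forward progress is automatic, and the input only ever moves the player sideways.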
The whole team worked hard and it was worth it in the end. We realised early on that we could apply to enter the Indie games festival which is part of Dare Protoplay (organised by the same people doing Dare to be Digital). We were pumped to enter, but it meant cutting our original schedule from 9 weeks of development to 7. But we went for it anyway, and I’m glad we did.
13,000 people attended Protoplay over the course of four days. Certainly not that many came to the indie fest, as it was poorly signposted, so not everyone was aware it was there! However, there were still thousands of people who came to our booth. I was delighted that not only was our booth almost always full, but people were able to pick the game up and play, and almost everyone enjoyed it. Most of those who walked away only did so after playing the first level, which is intentionally simplistic as an introduction to the controls. That was mildly irritating; however, such moments were drowned out by the number of compliments we received.
A selection of such compliments were:
- “Where is the box to vote for this game?” (there were boxes to vote for the 15 Dare to be Digital games, but not for the games at the indie fest)
- “This is the best game in here”
- “This is better than the games in the Dare competition”
- “This is the indie fest”
- “I need this game now!”
Receiving such a vast amount of positive feedback from so many people of all ages and backgrounds over the course of four days makes it nigh impossible not to have been glowing afterwards. As well as the public, we received high praise from people in various professions within the industry. Our game even got a mention in The Guardian (the game is "Wee Paper Planes").
I want to make games. There’s now no doubt left in my mind. The reality check that was drilled into me over the past couple of years regarding the vast negativity that surrounds parts of the industry had kept some hesitation as to whether this is truly what I want to do. But I’d never experienced what the positive aspect is like. You know, people actually playing your game and even better, enjoying playing it. That is all I could think about as I talked people through the game. I felt like a rockstar. A really geeky rockstar.
After the high from Protoplay had settled, I took part in Ludum Dare 27. The theme was "10 seconds". I got off to a tiring start: on the first day, I had jumped out of a plane at over 11,000 ft. It was awesome, but it took it out of me.
During the theme voting rounds, the “10 seconds” theme had gained notorious popularity. It was very likely it would win. As such I’d started considering what game would suit such a theme.
I came up with an idea for a platformer where the player has to collect a gem within 10 seconds. After collecting it, the level will change (and increase in complexity and thus difficulty) and a new gem will spawn.
I quite liked this idea, but after skydiving, I really wanted to do a game based around that. I had the idea of making the player count 10 seconds after jumping before pulling open their parachute. The closer they get to 10 seconds, the higher the score. However, if they try to open the chute after 10 seconds, it won't open.
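The scoring rule described above fits in a few lines. This is a hedged sketch in Python rather than the actual game code, with the scale factor (`max_score`) chosen purely for illustration:

```python
def skydive_score(pull_time, limit=10.0, max_score=100):
    """Score the jump: the closer the chute pull is to the 10-second
    limit, the higher the score; any later and the chute fails."""
    if pull_time > limit:
        return 0  # too late: the chute won't open at all
    return round(max_score * (pull_time / limit))
```

So pulling at exactly 10 seconds scores the full 100, pulling at 5 seconds scores 50, and pulling at 10.5 seconds scores nothing.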
I still liked my initial idea so decided it would be a good idea to really push myself. I decided to do a collection of mini games. I felt two would be too few so I aimed for at least three.
I started off creating a framework to run multiple games. It went really well, and it wasn't long before I was working on my first mini-game: the skydiving. Although the concept is quite simple, it wasn't as simple to make it work smoothly in code. However, I still managed it within good time.
During the time I spent making the skydiver, I came up with another idea for a mini-game: one where the player has 10 seconds to find the toilet. There would be multiple rooms that the player can't see into until they open the door. I'd randomise it so that not only was the toilet in a random room, but the doors were orientated randomly too. This was the next game I worked on, because it sounded like it'd be fun.
It didn't take long to get the mechanics up and running; I had over a day left, and I was feeling confident. I spent an hour or so touching up the controls on both games, as well as replacing the placeholders with more substantial sprites. I then checked the time: less than a day left. Losing that "+1 day" made it feel like I had considerably less time, and I still had another game to make. A platformer, no less. I've had trouble in the past getting the basics of platformers to work, namely the character movement and the platforms themselves.
Unfortunately, I didn't have a stroke of luck getting those mechanics to work this time either. It took ages before I thought I'd managed it, only to discover it didn't actually work when I made a more complex level. I was even more dismayed when I saw I had three hours left. I didn't want to submit something unfinished, so I had to come up with another game I could build in under three hours, leaving time to double-check everything.
I decided to adapt the original game idea with a different gameplay mechanic. Instead of a platformer, it would play more like Helicopter (or Jetpack Joyride): the player is in the air, falls towards the ground, and must use a rocket to fly back upwards. They can also strafe left and right, and using these controls must collect gems within 10 seconds. I did manage to get this in before the submission deadline; I'd wanted to add obstacles to make it more challenging, but I just didn't have time.
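The rise-and-fall movement at the heart of that mechanic is a single integration step per frame: gravity always pulls down, and holding the rocket adds upward thrust. A minimal sketch, again in Python and with illustrative constants rather than the tuned values from the game:

```python
GRAVITY = -30.0  # units/s^2, always pulling the player down
THRUST = 60.0    # upward acceleration while the rocket is held

def step(y, vy, dt, thrusting):
    """Advance vertical position y and velocity vy by one frame of
    length dt; the player is clamped at the ground (y = 0)."""
    accel = GRAVITY + (THRUST if thrusting else 0.0)
    vy += accel * dt
    y += vy * dt
    return max(y, 0.0), vy
```

Called every frame with the frame time as `dt`, this gives the characteristic Helicopter feel: release the rocket and the player arcs downward; hold it and they accelerate back up.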
As with my Ludum Dare 23 submission, I will be posting my results for this submission on this blog. Click here to go to my Ludum Dare 27 game.
I wanted to do more this summer; while I did some work on an OpenGL framework and read up some more on C++, I didn't do nearly as much as I'd planned. However, being realistic, this was my best summer break yet for working and improving myself.
I've a few weeks left before I start the Masters course at Abertay, so I'll continue to work on the other things I wanted to do. I hope I'm ready to take on the course and that I've got what it takes to make it. Certainly I'd rather find out within the safety net of education than on the job, where it could have a more damaging effect.
A blast from the past (June 2013)
This is a video of the pod-racing game that I made on the PlayStation2 in my second year at the University of Abertay Dundee. It uses a provided framework which allows sprites to be rendered.
Being a geek, I've always had a fondness for Star Wars. As terrible as the prequel films subjectively are, the pod-racing from Episode 1 was awesome, especially for my hyper-imaginative child mind at the time. This fondness has always carried through, and when I was tasked with creating a pseudo-3D game, I didn't have to think twice about the game I wanted to make.
Due to time constraints, this game didn't stand a chance of rivalling the commercial pod-racing games. It's basically a souped-up drag race. To make it more interesting, there are also obstacles in the way for racers to dodge.
The obstacles also make up the lines of the track. In the video there are 150 obstacles being rendered. I used box collision to check whether parts of the pod had collided with obstacles. The application supports damage to each engine, so box collision is performed twice per pod for every object. Obviously this was highly inefficient and made the game run unplayably slowly. So that the sprites were rendered correctly, I reordered them each frame; this was done separately for obstacles and players. Ordering the sprites meant I could check which sprites were ahead of the player being tested and disregard the later obstacles. Additionally, I only checked collisions with the obstacles, and instead used a boundary for the sides of the track. Similarly, for collision between players, I only had to check the player directly behind and directly in front. These optimisations drastically improved the performance of the application. A further optimisation would be to use spherical collision instead, as the y-axis is never really taken into account, so the objects can be treated as spheres.
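The culling trick described above — reusing the per-frame depth ordering so that only obstacles near the pod ever reach the expensive box test — can be sketched like this. A hedged Python illustration (the original was C++ on the PS2), using the standard library's `bisect` to find the window of candidates in the sorted list:

```python
from bisect import bisect_left, bisect_right

def candidates(obstacle_zs, pod_z, reach):
    """obstacle_zs is already sorted by depth each frame (needed anyway
    for correct sprite draw order), so only obstacles within 'reach' of
    the pod need the full box-collision test; the rest are skipped."""
    lo = bisect_left(obstacle_zs, pod_z - reach)
    hi = bisect_right(obstacle_zs, pod_z + reach)
    return obstacle_zs[lo:hi]
```

For example, with obstacles at depths `[1, 5, 9, 14, 20]`, a pod at depth 9 with a reach of 4 only has to box-test the obstacles at 5 and 9; the others never enter the per-corner checks at all.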
The framework provided allowed either a textured sprite (using magenta for transparency) or a coloured sprite with an alpha channel. In addition, the colour of each sprite could be changed per corner, and sprites could be rotated. This functionality inspired some of the features I'm most pleased with, mainly involving the engines. I decided to implement an "overheat" function: the player can go even faster, but doing so heats up their engine, shown by the engines beginning to turn red. If the player loses an engine, the pod dips. I also used the transparency and colour-changing functionality to create the electrical beam that connects the podracer's engines. The beam changes colour, direction and transparency at high speed to create the electrical effect and, as expected, disappears when an engine is lost.
The ability to alter the colour of a sprite also led me to change my engine sprites from a set colour to grey. This not only meant that my overheating functionality would work, but also that I could introduce some basic customisation: the player can select what colour their ship is, as well as which ship to race.
As well as being able to rotate a sprite, sprites can be scaled, which is required for the pseudo 3D effect. I didn’t like the straight perspective created when simply scaling by distance as well as the fact objects came from the centre quickly then slowed down. I tried to fix this but, as can be seen, didn’t succeed. However the curving effect I had created grew on me and so I kept it in.
To help highlight the 3D aspect of the game, the pod consists of 3 sprites. This not only allows me to “destroy” the engines, but also when rendered with the 3D effect is quite convincing (until the NPC pods get towards the centre).
As it was a racer, NPC racers had to be created. I'd never done any AI before, and that didn't really change with this application; however, the basic "AI" I did implement worked as well as needed for a demo. Had I more time, I would have tweaked it to be a little smarter. There are different levels of difficulty, which determine the characteristics of the AI. The basic idea behind the implementation is that if the NPC sees a player or obstacle ahead of it, it shifts to the side. The higher-difficulty AI can see further ahead and move a larger distance, which is why they tend to perform better. Additionally, their speeds change based on difficulty, which on some difficulty levels leads NPCs to suicide as they overheat their engines. On the final difficulty, though, the NPCs race at the highest speed possible before overheating ensues, which makes them formidable.
Amazingly, I was still avoiding vectors when creating this game. Therefore, there's no loss in forward momentum when the players move sideways. This is obviously something else I would improve if I were to go back to this, as it would improve the integrity of the race. How much the player can shift left and right is dependent on their engines; if they lose one, they're limited in how far they can shift.
One thing racers need is a GUI to tell them what they want to know. I had fun creating my HUD; slowly adding more functionality until I had to stop due to lack of screen space!
The most obvious thing to include first was the position the player is in. Next, the player also wants to know how many they're racing against (being in 1st place is tainted when you're the only racer), and I'm quite proud of the fact that my game was able to run with all 150 objects plus the collision and AI for 20 podracers. Another thing to include is the time. When I first created this, I wasn't very familiar with timing code; I managed to get functionality to count in seconds. To add more "precision", I cheekily added a random number at the end; the application runs so fast that it appears to be correct! Though that does mean you could technically beat your previous best by less than a second and be screwed over.
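The fake-precision trick works because the display is redrawn every frame, so the random fractional digits change fast enough to pass for real milliseconds. A minimal Python sketch of the idea (the function name and format are illustrative):

```python
import random

def display_time(elapsed_seconds):
    """Whole seconds are the real counter; the three fractional digits
    are random. Regenerated every frame, they flicker quickly enough
    to look like a genuine millisecond readout."""
    return f"{int(elapsed_seconds)}.{random.randint(0, 999):03d}"
```

Only the integer part is trustworthy — which is exactly the caveat above about beating a best time by less than a second.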
The next addition was how far the player has travelled, and how fast. Specific to my game, I also included the condition of both engines. This also took advantage of the ability to create transparent sprites and to colour them. An engine's bar starts at 100% (green); as the engine overheats, the bar shrinks (towards red) and then expands again when cooling. I also implemented functionality meaning that as the engines are overheated, their optimum condition also decreases, so they never return to 100%. Additionally, non-fatal collisions with obstacles and players damage the engines. This opens itself up to pick-ups that fix your engines, but I sadly didn't have the time. When an engine is destroyed, its bar is replaced with a "destroyed" message.
As the players are ordered each frame for rendering, I realised I could take advantage of this and display the order as it corresponds to the positions in the race. I created a feature that begins with arbitrary names and then, if there are more players, generates names for them. I then made it so that the player is highlighted in green and, even cooler, racers who have been destroyed are highlighted in red. This also helped, when testing, to show why some racers never seemed to catch up! Another feature I'd wished to implement was a "map" to help the player visualise how well they're doing and how close they are to the end.
As well as allowing the player to change ship, I also created 3 different "levels", though these are more aesthetic alterations than functional ones. Personally, I love the water level.
Sadly, the recording didn't capture the audio, though there wasn't much of it. There were only two sounds: the engines and the explosions. The engine sound changes volume based on the speed the pod is going; similarly, the explosion sound's volume is adjusted based on the distance from the player. The Linux PlayStation 2 development kit we used only has 2 audio channels available, so to allow for multiple sounds at the same time, I combined the audio for both sounds so that they appear to play correctly. This wasn't perfect, and sometimes a sound was dropped, but it still allowed for a more dynamic soundscape.
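The channel workaround amounts to pre-mixing two sounds into one buffer so a single hardware channel appears to play both. This is only the general idea, sketched in portable C++; the actual PS2 code and sample formats aren't shown in the post:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Mix two mono 16-bit sounds into one buffer by summing samples.
// Clamping avoids wrap-around distortion when both sounds peak at once.
std::vector<int16_t> mixSounds(const std::vector<int16_t>& a,
                               const std::vector<int16_t>& b)
{
    std::vector<int16_t> out(std::max(a.size(), b.size()));
    for (size_t i = 0; i < out.size(); ++i) {
        int sample = (i < a.size() ? a[i] : 0) + (i < b.size() ? b[i] : 0);
        out[i] = static_cast<int16_t>(std::min(32767, std::max(-32768, sample)));
    }
    return out;
}
```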
Finally, I also included a save file: the game reads from and writes to a text file containing the best finishing time. This adds slightly more replayability in trying to improve a personal score, although the player would soon realise the one currently set is bloody difficult to beat. To do so requires taking advantage of the fact that if you pass over the line you complete the race, even if you've lost both engines. So on the final stretch you want to go all out and blast across the finish.
Dare to do it anyway (May 2013)
So I’ve finished my undergraduate course at Abertay University.
Unfortunately I didn't get into Dare To Be Digital. For the second time. As was to be expected, some of the feedback was arguable, and I'm rather surprised at some of the teams that got through over what I, a heavy pessimist, believe to be a brilliant game idea, especially for what's required of the competition.
However, not being tied to developing the prototype for 9 weeks frees me up to work on multiple projects to improve my skills. But as it was a decent idea, and one that would improve my knowledge of the (brilliant) Unity engine, I'll still be working with my Dare team's designer to make the prototype.
After two days, I've already laid quite a lot of groundwork, which is certainly down to the ease of use presented by Unity.
It will be interesting to see what bumps I hit along the way in development, but so far every bump's had a simple solution provided by Unity. For example, the camera class is very easy and intuitive to use. The game will have local multiplayer (as every game should), so there's a need for dynamic splitscreen; this is a simple case of altering the viewports of the cameras. Additionally, the input tags are useful; for instance, reading horizontal movement is just a call to Input.GetAxis with the "Horizontal" axis.
Nothing overly pretty to show yet, however: simply grey cylinders that you can move around in their respective screens. I've only a single plug-in controller, though, so I've included alternative keyboard controls; if there were ever 4 players, the keyboard would be more crowded than an orgy in an elevator.
Fiery fate (May 2013)
It seems to be the case that the closer I am to a deadline, the more work I manage to produce. It's not a pressure thing; it just happens that way.
This is problematic when trying to write up on what I’ve produced when I keep on producing. Still, what I have produced has certainly sexed up my application.
The first thing I managed to do was implement a skybox to fill out the dull darkness. Had I been able to implement some of the technologies earlier on, and had more time to implement more pretty shit, I would have had procedurally positioned solar systems, which is similar to how the skybox images were created.
The second thing I managed was to implement more complex lighting, based on the vertex normal. The issue I had with this was that I didn't know where the neighbouring vertices were. But then I realised I could "guess".
I basically create two more vertex positions, offset perpendicular to each other from the original vertex, and work out their positions on the planet. From these I can calculate the normal for the main vertex.
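As a CPU-side C++ sketch of that "guess" (the real version lived in the shader; the names and the epsilon offset are my own):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Project a flat cube-face point onto the unit sphere. In the real
// shader this step would also displace the point outwards by the fBm
// height; that is omitted here.
static Vec3 toSphere(Vec3 p) { return normalize(p); }

// "Guess" the normal: nudge the flat vertex a tiny amount along the
// cube face's two tangent directions, project all three points onto
// the sphere, and cross the resulting edge vectors. Note the winding
// (order of the cross product) depends on the face's orientation.
Vec3 guessNormal(Vec3 flat, Vec3 du, Vec3 dv, float eps)
{
    Vec3 p  = toSphere(flat);
    Vec3 pu = toSphere({flat.x + eps * du.x, flat.y + eps * du.y, flat.z + eps * du.z});
    Vec3 pv = toSphere({flat.x + eps * dv.x, flat.y + eps * dv.y, flat.z + eps * dv.z});
    return normalize(cross(sub(pu, p), sub(pv, p)));
}
```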
While this creates much more realistic terrain, basing it on the vertex normal (as opposed to calculating it in the pixel shader) creates some artefacts, including emphasising the patch seams.
I simply haven't the time to fix these issues. If I calculated the normal based on pixel position, the vertex position wouldn't hinder the shading. To do this, though, I'd need to pass in the cube's world matrix, which would be maddening to do for every pixel; instead I'd have to use a constant buffer, and that in turn means allocating aligned memory.
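For reference, the alignment requirement looks something like this; the struct layout is a stand-in of my own, not the project's actual buffer:

```cpp
#include <cstddef>

// CPU-side data for a constant buffer has two alignment demands: SSE
// matrix types like XMMATRIX want 16-byte-aligned storage, and the
// D3D11 buffer itself must be a multiple of 16 bytes. alignas(16) on
// the struct covers stack/static/member storage; a heap allocation
// would need an aligned allocator such as MSVC's _aligned_malloc.
struct alignas(16) PerFrameBuffer {
    float world[16];      // stand-in for the XMMATRIX world matrix
    float cameraPos[3];
    float padding;        // pad the float3 out to a full 16-byte slot
};
```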
While my land looks better than the totally flat appearance when rendered with spherical normals, it's now noticeably un-Earth-like; it looks really rounded.
So I decided to try out different fractal terrain techniques to try and find a better terrain.
The first was a heterogeneous multifractal. This made the terrain more jagged inland, smoothing out towards the coast. However, it needed to be lowered, as the coast was too smooth; similarly, the mountains were too jagged.
The next was a ridged multifractal (the same as used for creating the nebulas in the skybox). This made more mountain-like structures; however, the land looks more like sand dunes than rock, and the coastline doesn't look correct.
I decided instead to use the original fBm but with some form of ridges. They’re not quite mountains, as they run at the same height, but they do look like mountain ranges and less like sand dunes.
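The usual way to fold ridges into fBm is the 1 - |noise| trick; the post doesn't say exactly which variant was used, so treat this as an illustrative sketch with an assumed underlying noise function:

```cpp
#include <cmath>
#include <functional>

// fBm with ridging, assuming some noise(x, y) in [-1, 1] (a stand-in
// here; the project used Perlin noise in HLSL). Folding each octave
// with 1 - |n| turns the noise's zero-crossings into sharp crests.
float ridgedFbm(std::function<float(float, float)> noise,
                float x, float y, int octaves,
                float lacunarity = 2.0f, float gain = 0.5f)
{
    float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        float n = noise(x * frequency, y * frequency);
        sum += (1.0f - std::fabs(n)) * amplitude;   // the ridge fold
        amplitude *= gain;
        frequency *= lacunarity;
    }
    return sum;
}
```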
I’m not 100% happy with it, however I really need to crack on with my dissertation. Although not before I implement better clipping techniques!
For I can’t really do my results section when my application doesn’t run as well as it should do.
Plan ‘Merging’ together (April 2013)
While I knew in my head I'd be able to come up with a simple solution to merging the patches, I didn't imagine I'd be able to solve it with so little effort, and so soon after fixing my noise issues too!
The cracks aren’t 100% gone. But they are drastically reduced, enough for this project.
I wanted to take advantage of the DirectX 11 tessellation stage's ability to assign each edge of a tessellated patch a different amount of detail from the patch's centre.
So I knew it should be possible to make the edges of my level of detail seamless.
My first attempt acknowledged that each level of detail essentially doubles the edge detail: if an edge had 32 vertices, the next level of detail would give it 64. As the two neighbouring patches aren't connected, these edges are open to cracks. So I simply came up with an algorithm that halved the detail on the outer edges of the child patches.
However, this only solved the issue for the “best case scenario” and I’d overlooked the typical scenario. As shown in a previous post, there were moments where a patch may have neighbouring patches of different levels of detail. As each edge can only have a single value of detail, I knew it was down to the child patches to make the edges seamless.
Of course, merging LOD is a solved issue; however, I wished to come up with a less complex method that worked with my current setup. I wanted to do it without having to traverse my quadtree structure more than once, as that would considerably slow down my application.
Fortunately, I managed to do just that.
As the picture shows, the edges of each patch merge seamlessly into the surrounding patches.
However, this technique is limited. Conserving the number of vertices on an edge means that eventually a patch edge will reach a detail of 1 (the smallest possible), and further levels of detail will reopen the possibility of cracks. However, they're drastically reduced in size and much rarer.
To maximize the guaranteed crack-free levels of detail, the edge is set to have the maximum number of vertices (64), which means there can be up to 7 seamless levels of detail.
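The halving rule works out like this as a sketch (my own formulation of the scheme described above, not the project's code):

```cpp
// Each child level halves the tessellation factor used on outer edges,
// so a child edge always lines up vertex-for-vertex with its parent's
// edge. Starting from the maximum of 64, the factor bottoms out at 1
// after levels 0..6 (64, 32, 16, 8, 4, 2, 1), i.e. seven seamless
// levels; beyond that, cracks become possible again.
int edgeDetailForLevel(int level)           // level 0 = root patch
{
    int detail = 64;
    for (int i = 0; i < level && detail > 1; ++i)
        detail /= 2;
    return detail;
}
```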
These next pictures aim to give a sense of the difference this merging technique makes:
without the method:
with the method:
The only technical thing really left to do is implement a clipping technique that reduces the amount of patches rendered, namely frustum culling.
Noisy excitement (April 2013)
My noise issue is solved.
Since its first implementation in my honours project, the noise hasn't worked. I wasn't sure why, as the implementation had worked in my previous procedural project. However, it wasn't a large issue at first, as the other features had to be implemented.
The cause of the problem lingered, though. Recently I went over my 3rd year project and realised that while the shader was the same, the implementation in fact wasn't: that project used the effects framework, while I'm using DirectX's API directly. I then wondered about the usage of globals in shader model 5.0.
Sure enough, you cannot simply declare a mutable variable in global scope; it must be static. I knew before even doing this that it would be hazardous for performance, but to check whether this was the issue I proceeded with using a static array for the hash values of my noise.
Sure enough, success:
However, I now needed a way of having an array to read from that isn't a static in global scope. The two obvious solutions are a constant buffer or a texture buffer. A constant buffer is the option I went with: besides being easier to implement, constant buffers are optimised for frequent updating from the CPU, and as the buffer will only be updated whenever the noise permutation is generated (typically once), it should run rather nice.
And run rather nice it did.
To my surprise, it actually runs better than it had been doing! Moving the permutation array out of the shader and into my application also allowed me to have some fun with generating different Earth-like planets.
I imagined that when I recreated the array it would slow down the application. Again to my surprise, it regenerates exceptionally fast, as shown in my video.
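Generating the permutation on the application side might look like this sketch; I'm assuming a classic Perlin-style table (the function is my own, and the real code would then copy the result into the constant buffer):

```cpp
#include <algorithm>
#include <array>
#include <numeric>
#include <random>

// Build a Perlin-style permutation table on the CPU, ready to upload
// once per planet. Different seeds give the different Earth-like
// planets mentioned in the post.
std::array<int, 512> makePermutation(unsigned seed)
{
    std::array<int, 256> p;
    std::iota(p.begin(), p.end(), 0);            // fill with 0..255
    std::shuffle(p.begin(), p.end(), std::mt19937(seed));

    std::array<int, 512> doubled;                // duplicated so the
    for (int i = 0; i < 512; ++i)                // shader needn't wrap
        doubled[i] = p[i & 255];
    return doubled;
}
```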
Fixing the noise leaves only one more glaring issue: Those pesky cracks.
I attempted to alleviate them with a method I came up with. But I soon realised that this method was only working for specific patches and didn’t take into account that there are other neighbouring patches which may have differing levels of detail. As shown in this image:
Where the green boxes are where there are no possible cracks, but the red boxes are where cracks can, and bloody well do, appear.
It's more than possible to remove the cracks; I know how I could do it. However, I wish to come up with a smarter method that sorts them out and still runs smoothly.
No shortcuts (April 2013)
As my planet was struggling to render in real-time I decided it was time to look into using a texture to relieve some of the computational stress in calculating the fractal Brownian motion.
I knew going into creating the texture that there would be limitations:
- The texture can’t be too large as memory is limited
- When using the texture as a basis, rounding errors will be immediately prominent, as no more detail can be retrieved.
However, before I even got to these issues, I hit implementation problems. Firstly I had to re-familiarise myself with DirectX 11's API for textures, which took a good while; in its attempt to be diverse and efficient, it's really convoluted in what you need to set up.
It's then not clear exactly how you fill a texture. My first shot was at creating a 1080x1080 texture, which was very quickly shot down when the compiler informed me I'd caused a stack overflow. So I decided to start off with a 256x256 texture.
I then ran into problems when sampling the texture: it only appeared to sample the first row of the texture. I soon managed to get it to render more of the texture, however it was clear that it still wasn’t completely correct - it didn’t appear to render the entire texture, just a section. Tests on this were inconclusive in finding out exactly what was being rendered.
These issues, the likelihood that it would be more of a hassle to fix them and implement the textures as desired, and the inevitable problems that would have come after, led me to the decision to revert to calculating the fBm in the shader and instead focus on trying to get it to run better.
One reason for it running slowly is that patches are rendered even if they're not viewable; I've yet to make them clip. This was particularly a problem when my planet was relatively small, as the quadtrees were splitting up on the other side of the sphere. So when I drastically enlarged my planet, it actually ran smoother.
I've also managed to implement the distance-based LOD for the fBm, where fewer octaves are used when far away. This makes the transition a lot smoother than when it was implemented through the vertices.
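A sketch of how the octave count might fall off with distance; the exact mapping here is my own guess, not the project's:

```cpp
#include <algorithm>
#include <cmath>

// Distance-based fBm LOD: each doubling of camera distance drops one
// octave of detail, clamped to a sensible range. maxOctaves/minOctaves
// are assumed values for illustration.
int octavesForDistance(float distance, int maxOctaves = 12, int minOctaves = 3)
{
    int dropped = static_cast<int>(std::log2(std::max(1.0f, distance)));
    return std::max(minOctaves, maxOctaves - dropped);
}
```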
I've also improved the aesthetics in preparation for the comparative images in my study. I'm still unable to get the correctly transformed normals for the planet, so the lighting currently uses incorrect normals. However, because I had to pass in a matrix for a constant buffer to be relevant, I've now been able to take advantage of it to pass in not only the tessellation patch detail but also the camera position and its look vector, which are used for the fBm LOD and specular lighting. I use more detail in the pixel shader as it's possible to feign more land mass.
This also makes the planet a little tidier when getting near the coasts and even allows for smaller islands to appear that wouldn’t have if only based off the vertices.
Here is a shot showing the improved lighting:
I can now record my application straight off the computer (so no more unsteady hands).
I had issues compressing the video, however, and Tumblr didn't really help, so I'll be posting my videos on Vimeo for the time being.
This video shows how my application converts my cube into a sphere
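For reference, a widely used cube-to-sphere mapping is shown below; the post doesn't say whether the project uses this formula or plain normalisation, so this is purely illustrative:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Map a point on the unit cube ([-1,1]^3) onto the unit sphere. This
// well-known formula distributes vertices far more evenly near the
// cube's corners than simply normalising the position would.
Vec3 cubeToSphere(Vec3 p)
{
    float x2 = p.x * p.x, y2 = p.y * p.y, z2 = p.z * p.z;
    return {
        p.x * std::sqrt(1.0f - y2 / 2.0f - z2 / 2.0f + y2 * z2 / 3.0f),
        p.y * std::sqrt(1.0f - z2 / 2.0f - x2 / 2.0f + z2 * x2 / 3.0f),
        p.z * std::sqrt(1.0f - x2 / 2.0f - y2 / 2.0f + x2 * y2 / 3.0f),
    };
}
```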
Onto bigger and better things (March 2013)
So I’m back to my cube to spheres but this time with a new look:
As expected, the LOD doesn't work properly when approaching the "corners" of the sphere, to the point where detail actually decreases as you reach them.
The next step to rectify this is to base it now on the patch’s new spherical position.
This still won’t be 100% correct as the elevated vertices will still be based on their flat patches, so if the camera is flying over the top of the mountains, the LOD won’t be as high as if the camera was flying through the mountains.
Another issue that's arisen, though one which had also occurred with my split-face implementation, is that the higher LODs drop the frame rate dramatically.
As I expected, dropping the detail of each patch didn’t fix the issue, which confirmed my suspicion that the issue instead lies within the fractal calculations as each patch is recalculating the fBm. This obviously is inefficient and so I will have to work out a way of calculating the fBm for the object which the shader can then use.
The most obvious solution is to pre-calculate the fBm on the CPU into a buffer which the shader can access. However, this would create a catch-22 between the detail being limited by the size of the buffer and how much memory the buffer would take up, given the project is for multiple planetary bodies.