2016 Spring Game Project (In Progress) – The Adventures of Mark


I’m currently working on a 2D Java game called The Adventures of Mark, with inspiration coming from games like Final Fantasy and Undertale. It features a typical too-tired-for-this-shit college pizza delivery boy who must embark on an adventure down the block to earn his pay. Will his Tolerance and Brain Power hold true? Find out, nightly, on this project’s GitHub repository, located here. This is definitely one of the larger games I’ve worked on, and it’s filling me with more and more determination each day.

[Screenshot: the current state of the battle system. All of the assets (except the fonts) were drawn digitally and animated by me.]

I’m collaborating on the game with a friend of mine, Jackson Yeager (his portfolio). Although Jackson may know more than I do about various data structures, he has less experience with game making. Regardless, I’m sure both of us will learn something from this project, and I’d definitely recommend taking a look at it if you’re at all interested.

I’ll be updating this post/making new ones as the game progresses.


2015 Winter Cryptography Project – P / NP, The Clique Problem

I recently read a book titled The Golden Ticket by Lance Fortnow, in which the P / NP Problem and its importance to the theory of Computer Science are heavily discussed. Explained briefly, the P / NP Problem asks whether every problem whose solution can be quickly verified can also be quickly solved. In relation to cryptography, P / NP is crucial because problems that are hard to solve but easy to verify are what allow us to, for example, hide private information on a public network. One of the most famous NP-complete problems, the Clique problem, is discussed in the book and is, in my opinion, one of its more intriguing problems.

Take, for example, a society called Frenemy, in which any two citizens who meet have a 50% chance of becoming friends and a 50% chance of becoming enemies. In his book, Fortnow details how it would be virtually impossible to find all cliques of varying sizes inside this society, with the difficulty (and time) increasing as the size of the desired clique increases (a clique is a group of people in which everyone is friends with every other member). At first, I didn’t think it would be that difficult to find the largest clique, but after writing a simulation for myself, I was proven otherwise.

I stayed true to the Frenemy rules in my simulation – every time someone meets someone else, there’s a 50% chance they become friends and a 50% chance they become enemies. I experimented with varying population sizes as well as varying numbers of citizens each person can meet. I wrote two methods for the experiment: one in which everyone meets everyone else in the society, and one in which everyone meets only the people on their own street (there are 9 streets, with 1/9 of the total population living on each). To make the data easier to set up and understand, each citizen is given an identification number as well as an ‘address’, the number of the street they live on. I aimed to find every clique of size 3 in the society, and to see for myself how difficult that became as the population size increased.
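
If you’re curious what the brute-force search looks like, here’s a minimal sketch in Java – not my actual citizen classes, just a boolean friendship matrix – that generates a Frenemy society and counts every clique of size 3:

```java
import java.util.Random;

public class Frenemy {
    public static void main(String[] args) {
        int n = 100; // population size
        Random rand = new Random();

        // friends[i][j] is true if citizens i and j became friends on meeting
        boolean[][] friends = new boolean[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                friends[i][j] = friends[j][i] = rand.nextBoolean(); // 50/50
            }
        }

        // Check every triple of citizens -- roughly n^3 / 6 checks
        int cliques = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                for (int k = j + 1; k < n; k++)
                    if (friends[i][j] && friends[j][k] && friends[i][k])
                        cliques++;

        System.out.println("Cliques of size 3: " + cliques);
    }
}
```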

[Screenshot: a couple of the cliques present in a population of 100 where citizens met everyone on their own street.]

I found that generating the population and finding all cliques of size 3 took much, much longer as the population size grew, which makes sense in hindsight. Adding a single citizen to a population of 20 where everyone meets everybody has nowhere near the impact of adding one to a population of 10,000, since the number of updates the computer has to make to each citizen’s list of friends and enemies is so much larger. On top of that, the brute-force search itself checks every possible triple of citizens, so the work grows roughly with the cube of the population size. When I was reading Fortnow’s book, I didn’t put much thought into how quickly that work grows. I naively assumed the growth was more or less linear, which would make the total number of computations needed to find all cliques of size 3 far smaller than it actually is. Only after I made my own simulation did I truly understand the difficulty surrounding the Clique problem. If you find yourself with some free time, I’d recommend writing a simple simulation like this yourself so you can see the results firsthand.

I’d like to thank Lance Fortnow for pushing me to explore this problem through his book, as all P / NP problems (not just the Clique problem) are profoundly interesting in their own right. And, if you’re planning to look more into the P / NP problem, Lance Fortnow’s The Golden Ticket is a must-read. The discussion, and the very existence, of the P / NP Problem has really expanded my understanding of Computer Science and the importance of not only the physical act of coding, but the theory behind CS as well.

2015 Winter AI/Arduino Project – Brainy Crawler

Having taken a general engineering course my senior year of high school at the same time as I was researching Neural Networks (read more about Neural Networks here), I decided it was time to make a physical agent – a tangible robot controlled by a Neural Network. My background in robotics helped in this venture as well, as I already knew how servos and ultrasonic sensors worked.

Before getting to the hard part of writing the code to control the robot, I had to build the agent itself. I knew I wanted to make a robot that learned how to move, but I didn’t have 2 motors. Instead, I decided to use servos, which not only allow movement but also make the motion itself complex enough to warrant a fair amount of learning to master. For parts, I used an Arduino Uno with 2 Parallax standard servos and 1 HC-SR04 ultrasonic sensor. A friend of mine, Clay Busbey, helped out by 3D printing parts for me. I had 3 parts printed: the main chassis and 2 essentially straight bars that form 1 arm with a joint. I put the Arduino Uno and a half-size breadboard onto the chassis and hooked everything up from there.

[Fritzing diagram of how I hooked up all of the components. It doesn’t show the bars used for arms, but you’ll see how those work in the gif below. The red and green LEDs indicate what the Arduino is doing: red means it’s switching between Networks, and green means movements are being controlled by a Neural Network.]

In terms of the Neural Network, I used 3 input nodes, 1 hidden layer of 10 nodes, and 2 output nodes. The input values are the current position of each servo, along with the current distance to the object detected by the ultrasonic sensor. Each output value decides whether its servo should increase or decrease its current angle (output > or < 0.5). I used a sigmoid function (1 / (1 + e^(-x))) as my activation function. Each Neural Network controls one motion, with a genetic algorithm that replaces the least successful Network (the 2 best networks make a crossover/mutation child that replaces the worst network). Fitness is awarded based on how far the robot moved while under the control of each Neural Network.
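
To make that concrete, here’s the shape of the feedforward pass in Java (the robot itself runs an Arduino sketch, and the weight values here are placeholders for whatever the genetic algorithm evolves):

```java
// A minimal sketch of the crawler's brain (sizes from the post: 3-10-2).
public class CrawlerBrain {
    double[][] w1 = new double[3][10];  // input -> hidden weights
    double[][] w2 = new double[10][2];  // hidden -> output weights

    double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // inputs: servo 1 position, servo 2 position, ultrasonic distance
    double[] feedForward(double[] inputs) {
        double[] hidden = new double[10];
        for (int h = 0; h < 10; h++) {
            double sum = 0;
            for (int i = 0; i < 3; i++) sum += inputs[i] * w1[i][h];
            hidden[h] = sigmoid(sum);
        }
        double[] out = new double[2];
        for (int o = 0; o < 2; o++) {
            double sum = 0;
            for (int h = 0; h < 10; h++) sum += hidden[h] * w2[h][o];
            out[o] = sigmoid(sum);
        }
        return out; // out[s] > 0.5 -> nudge servo s's angle up, < 0.5 -> down
    }
}
```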

[GIF: timelapse of the agent moving across the floor (sped up 3x). Although it gets off to a good start, it hiccups due to either a bad crossover or a bad mutation.]

I enjoyed working on this project. It’s one thing to watch a program learn in a virtual world, and another entirely to watch it learn in the real one.

If you want to get the Arduino script and/or the .stl files for 3D printing the parts, you can get them here (.zip, 15 kb): https://drive.google.com/file/d/0Bz_0wgRmDpKqdGdGUS1YMk9oN1E/view?usp=sharing

2015 Winter AI Project – Evolution Simulator

The 2015 school year was when I began dabbling in AI. The idea always amazed me. Everything from movie depictions of AI to its real, practical uses in modern-day life interested me, and I wanted to know more.

Wanting to explore Neural Networks more deeply (read more about Neural Networks here), I decided to make an evolution simulator. The aim of the project was to create a population of creatures that, starting out knowing nothing about their environment, evolved to chase after food and run away from predators. The creatures can also evolve different attack and defense values, which determine whether one creature can eat another. A set of 12 creatures survives for as long as it can (each must eat food to replenish a diminishing energy value; if it reaches 0, the creature dies). At the end of each generation, the bottom half is eliminated, and the top half reproduces sexually once (2 children) with each other (first mates with second, third mates with fourth, and so on) and asexually once as well. Reproduction is accomplished with crossover/mutation that operates on the weights of each creature’s Neural Network as well as its attack value. Fitness is awarded based on how many times the creature eats.
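
Here’s a sketch of what one generation step might look like, with hypothetical Creature methods standing in for my actual crossover and mutation code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

interface Creature {
    double fitness();
    Creature crossover(Creature other); // mix Network weights + attack value
    Creature mutateCopy();              // asexual reproduction with mutation
}

class Evolution {
    // One generation: bottom half dies, each surviving pair makes 2 sexual
    // children, and each survivor reproduces asexually once.
    static List<Creature> nextGeneration(List<Creature> pop) {
        pop.sort(Comparator.comparingDouble(Creature::fitness).reversed());
        int half = pop.size() / 2; // top 6 of 12 survive
        List<Creature> next = new ArrayList<>();
        for (int i = 0; i < half; i += 2) {
            Creature a = pop.get(i), b = pop.get(i + 1);
            next.add(a.crossover(b)); // sexual child 1
            next.add(b.crossover(a)); // sexual child 2
            next.add(a.mutateCopy()); // asexual children
            next.add(b.mutateCopy());
        }
        return next; // back to 12 creatures
    }
}
```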

The Neural Network itself was fun to make. I decided I wanted an adaptable system – one whose architecture I can change whenever I want, including the size of each individual layer and the total number of hidden layers. For this project, I got the most success using one and two hidden layers with different numbers of nodes (18 nodes for 1 layer, 5 nodes each for 2 layers).
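
Here’s a minimal sketch of what I mean by adaptable: the layer sizes live in a single array, so changing the architecture is a one-line edit (the initialization range and field names are illustrative):

```java
import java.util.Random;

public class AdaptableNetwork {
    private final double[][][] weights; // [layer][from][to]

    // sizes = {9, 18, 2} gives 9 inputs, one hidden layer of 18, 2 outputs
    public AdaptableNetwork(int[] sizes, Random rand) {
        weights = new double[sizes.length - 1][][];
        for (int l = 0; l < weights.length; l++) {
            weights[l] = new double[sizes[l]][sizes[l + 1]];
            for (double[] row : weights[l])
                for (int j = 0; j < row.length; j++)
                    row[j] = rand.nextDouble() * 2 - 1; // random in [-1, 1]
        }
    }

    public double[] feedForward(double[] input) {
        double[] activation = input;
        for (double[][] layer : weights) {
            double[] next = new double[layer[0].length];
            for (int j = 0; j < next.length; j++) {
                double sum = 0;
                for (int i = 0; i < activation.length; i++)
                    sum += activation[i] * layer[i][j];
                next[j] = 1.0 / (1.0 + Math.exp(-sum)); // sigmoid
            }
            activation = next;
        }
        return activation;
    }
}
```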

The input for each creature is as follows: 2 ‘eyes’ that take up 4 input nodes each, plus one node expressing the current angle the creature is facing. Each eye detects the angle from the creature to the closest entity (or second-closest, for the second eye) and the r, g, and b values of that entity, each expressed as a fraction over 255 (the range a color value can have is [0, 255]), allowing the creature to evolve decision-making based on whether the entity it sees is edible. The output nodes control whether it turns left or right, and its velocity forwards or backwards (it moves more slowly while going backwards). I used a sigmoid function (1 / (1 + e^(-x))) as my activation function.
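
Assembled into code, the input vector might look something like this (scaling the angles into [0, 1] is my assumption; the color scaling is straight from the description above):

```java
// Build the 9-node input vector: 2 eyes x (angle + r + g + b) + facing.
double[] buildInputs(double angleToClosest, int[] rgbClosest,
                     double angleToSecond, int[] rgbSecond, double facing) {
    return new double[] {
        angleToClosest / (2 * Math.PI),
        rgbClosest[0] / 255.0, rgbClosest[1] / 255.0, rgbClosest[2] / 255.0,
        angleToSecond / (2 * Math.PI),
        rgbSecond[0] / 255.0, rgbSecond[1] / 255.0, rgbSecond[2] / 255.0,
        facing / (2 * Math.PI) // the creature's own heading
    };
}
```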

[Screenshot: a Neural Network with 9 input nodes, 1 hidden layer of 18 nodes, and 2 output nodes. The color of each weight is explained below.]

Before I go over examples of the simulation, I’ll explain what each of the graphics in the frame represents. The creatures are represented by shades of purple (creatures with high defense are more blue, while creatures with high attack are more red), and the current best creature (highest fitness) is outlined in yellow. The line extending from the center of each creature to its outline shows the direction the creature is currently facing. Food is represented by the green squares. Moving on to the panels outside the main frame: the one in the top right shows the current generation, the number of creatures alive (out of a set max population size of 12), the best fitness value of the overall simulation, the best fitness value of the previous generation, and the best fitness of the current generation (labeled O, LR, and C respectively). The graphic below that panel displays the Neural Network of the creature with the best fitness, where connections drawn mostly red have weights in (0, 1) and connections drawn mostly blue have weights in (-1, 0). Finally, the graph along the bottom of the frame plots the best fitness of each generation and shows the overall growth of the population.

[GIF: at the beginning, you can see that most creatures spin out of control and lack any objective.]

[GIF: after some time, the creatures still haven’t evolved the ability to head directly towards food, but they have formed some structure in the way they collect it. This population has also come to be dominated by high-defense creatures, as the high-attack creatures of previous generations did not perform as well.]

[GIF: the usual ending. Creatures have evolved not only to go towards food, but to distinguish food from other creatures. Even when another creature is closer, and thus more efficient to head towards, a creature will recognize it as a creature and instead go for a more distant entity it recognizes as food.]

[GIF: although creatures tend to evolve the ability to go straight for food, other strategies can emerge. In this run, the creatures settled on propelling themselves vertically, turning horizontally, and lunging left or right whenever they detect food.]

This project was incredibly fun to make. Although I had some frustration at the start while learning how to make Neural Networks of varying size, it was fun finding solutions to my problems. I will probably keep working on this project for weeks or months to come, and I will definitely be reusing my adaptive Neural Network in future projects.

If you want the source code, you can get it here (Java, 35 kb): https://drive.google.com/file/d/0Bz_0wgRmDpKqaXV0TVp0aXhIcTA/view?usp=sharing

2015 Winter AI Project – Wall Avoider

The 2015 school year was when I began dabbling in AI. The idea always amazed me. Everything from movie depictions of AI to its real, practical uses in modern-day life interested me, and I wanted to know more.

During the holiday break, I decided to dedicate my programming efforts to learning Neural Networks. I knew a little about them beforehand – the architecture and the theory of how they work – but had no idea how to implement them in my own projects. The Wall Avoider project was dedicated to making a very, very simple Neural Network that progressively got better at avoiding walls using a genetic algorithm.

But first, I’ll briefly explain what Neural Networks are. In short, they are systems modeled after the human brain – that is, they use a set of ‘neurons’ and ‘synapses’ that manipulate a given set of input data so that a desired output is reached. The issue, however, is that we don’t know in advance the weights and values necessary for the network to give us our desired output. There are several ways of measuring error and finding the best set of values. For this project, I used a genetic algorithm: determine the best network and ‘breed’ it with other good networks, producing equally good if not better Neural Networks. This works by choosing 2 Neural Networks that produce long runs (ones that don’t hit the wall) and randomly choosing weights from both to form a new Neural Network.
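
Here’s a sketch of that breeding step, assuming each network is represented as a flat array of weights (the mutation rate is an illustrative choice, not my actual value):

```java
import java.util.Random;

class Breeder {
    // The child takes each weight at random from one of two successful
    // parents, with a small chance of replacing it with a fresh random weight.
    static double[] breed(double[] parentA, double[] parentB, Random rand) {
        double[] child = new double[parentA.length];
        for (int i = 0; i < child.length; i++) {
            child[i] = rand.nextBoolean() ? parentA[i] : parentB[i];
            if (rand.nextDouble() < 0.05) {           // mutation rate: illustrative
                child[i] = rand.nextDouble() * 2 - 1; // new weight in [-1, 1]
            }
        }
        return child;
    }
}
```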

[Diagram: anatomy of a Neural Network.]

My project’s Neural Network has 3 input nodes and 1 output node. The value of each input node is either a 1 or a 0, depending on whether or not its respective ‘eye’ (a line extending from the center of the runner) hits a wall. These values are each multiplied by their respective weights (random at first, then altered through the genetic algorithm), summed together, and put through an activation function. I used a sigmoid function (1 / (1 + e^(-x))) as the activation function. Sigmoid functions are nice because their range is (0, 1), which maps naturally to an ‘on’ or ‘off’ setting. If a(x) > 0.5, the runner turned right. If a(x) < 0.5, it turned left. If a(x) = 0.5, it didn’t turn at all.
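
In code, the whole decision comes down to something like this:

```java
// eyes[i] is 1 if eye i currently intersects a wall, 0 otherwise.
// weights are the three evolved values.
double activate(int[] eyes, double[] weights) {
    double sum = 0;
    for (int i = 0; i < 3; i++) sum += eyes[i] * weights[i];
    return 1.0 / (1.0 + Math.exp(-sum)); // sigmoid, range (0, 1)
}

// a > 0.5 -> turn right; a < 0.5 -> turn left; a == 0.5 -> go straight
```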

[Screenshot: graphic of my Neural Network. i = input (eyes), W = weights, a = activation.]

After constructing this system, I began running tests. As the objective wasn’t a hard one to learn, the networks mastered the challenge quickly. In the gifs, the panel on the left shows the trial number and the fitness value (how well the network is doing) for each member of the population. p# = parent, c = child, and ^ marks the current best network.

[GIF: trial 1, struggling.]

[GIF: trial 7, a pretty decent network has been bred, but there’s still room for improvement.]

[GIF: trial 25, all networks are doing very well.]

This was a great intro to Neural Networks. I definitely want to take this idea further and create deeper, more complex networks for harder tasks.

If you want the source code, you can get it here (Java, 26kb): https://drive.google.com/file/d/0Bz_0wgRmDpKqSVRvbXpzcm1VRVE/view?usp=sharing

2015 Winter AI Project – Maze Runner

The 2015 school year was when I began dabbling in AI. The idea always amazed me. Everything from movie depictions of AI to its real, practical uses in modern-day life interested me, and I wanted to know more.

After my Chess project, I wanted to get a little more serious in my research into artificial intelligence. I looked up everything from examples of how it’s used in day-to-day life to videos of lectures by college professors. Through those lectures, and even some college thesis projects, I learned a lot about the concept of reinforcement learning, and it made me want to program something to test out the idea.

First, however, a very brief overview of my current understanding of reinforcement learning. Reinforcement learning is a system in which the AI learns about the environment it’s placed in through a system of actions and states. The AI is rewarded or punished in the various states for good actions versus bad actions (depending on whether you want to maximize utility or minimize cost – either way works, but for my example I’ll use a reward system), but it doesn’t know which actions are good or bad until it experiences them. Thus, over time, the AI should gravitate towards the actions that earn it the largest total reward.

Alright, now to my project. To explore the concept of reinforcement learning, I decided to go with something simple and intuitive: an artificial intelligence that finds the quickest path to the exit of a predefined maze. As mentioned before, the AI starts out knowing absolutely nothing about the maze it’s placed in and must learn everything from scratch.

[Screenshot: the original board, displaying the reward value of each tile (currently 0, because the AI doesn’t yet know which tiles are good or bad). The blue tile is the current position of the runner (controlled by the AI) and the green tile is the exit.]

The reward system works as follows: at the end of a trial, the AI sums the rewards of all the tiles it visited, divides that sum by the number of tiles visited, and uses the result to update the reward value of every tile it visited. Each tile averages that value into the rewards it received in previous trials. The AI gets a heavy reward boost for completing the maze faster than before, and a heavy punishment if it was slower (the trial’s reward is scaled up or down by a factor of 2). Thus, over time, the AI slowly becomes better and better at solving the maze.
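
Here’s a sketch of that update, with a hypothetical Tile class keeping a running average (the bookkeeping details are my reconstruction of the description above):

```java
import java.util.List;

class RewardUpdater {
    static class Tile {
        double reward; // learned value of stepping on this tile
        int trials;    // how many trials have updated it so far
    }

    // Called at the end of a trial with every tile the runner visited.
    static void updateRewards(List<Tile> visited, boolean fasterThanBest) {
        double sum = 0;
        for (Tile t : visited) sum += t.reward;
        double trialReward = sum / visited.size();

        // Heavy boost for beating the best time, heavy punishment otherwise
        trialReward *= fasterThanBest ? 2.0 : 0.5;

        // Each tile averages the trial's reward into its previous rewards
        for (Tile t : visited) {
            t.reward = (t.reward * t.trials + trialReward) / (t.trials + 1);
            t.trials++;
        }
    }
}
```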

[GIF: the 10th trial. The general path the AI takes (the tiles with the highest reward) is still rather erratic.]

[GIF: the 50th trial. The AI has begun to iron out the kinks in its path, but it isn’t yet the most efficient.]

[GIF: the 100th trial. The AI has almost found the most efficient path.]

I thoroughly enjoyed this project, as watching something learn, especially a program, is really interesting. I’m excited to see where I can take this in the future.

If you want to play around with this, you can get the source code here (Java, 24kb): https://drive.google.com/file/d/0Bz_0wgRmDpKqNUVpTFFQdUNaaUk/view?usp=sharing

2015 Winter Game/AI Project – Chess

The 2015 school year was when I began dabbling in AI. The idea always amazed me. Everything from movie depictions of AI to its real, practical uses in modern-day life interested me, and I wanted to know more.

I had been wanting to make chess for a while, as I’ve long been a fan of the game. And what better way to make it than to explore simple self-learning artificial intelligence at the same time? Before getting to the AI, however, I first had to make the game itself. Generating a list of possible moves was simple enough, but the real challenge was implementing more complex rules like castling, checks, and stalemates.

[Screenshot: the final game, showing all possible moves.]

After spending a while messing around with the game itself, it was time to begin working on the artificial intelligence. Having known nothing about the subject (I started learning about concepts like neural networks and genetic algorithms shortly after), I decided on a simple concept: the artificial intelligence would save all the games it had previously played, look back on them to determine the statistical probability that it would win if it made a given move (based on the outcomes of past games containing that same move), and decide whether or not to proceed with it. If it decided the move was bad, it would make a random move instead and add that to the database.
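
Roughly, the decision rule looked something like this (the map is a simplified stand-in for the actual .txt game database, and the 0.5 cutoff is illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

class MoveChooser {
    // move (encoded as a string) -> {wins, games played containing it}
    Map<String, int[]> history = new HashMap<>();
    Random rand = new Random();

    String chooseMove(List<String> legalMoves) {
        String best = null;
        double bestRate = 0;
        for (String move : legalMoves) {
            int[] record = history.get(move);
            if (record == null) continue; // never tried this move before
            double winRate = (double) record[0] / record[1];
            if (winRate > bestRate) {
                bestRate = winRate;
                best = move;
            }
        }
        // No known move looks good enough: explore a random one instead
        if (best == null || bestRate < 0.5) {
            best = legalMoves.get(rand.nextInt(legalMoves.size()));
        }
        return best; // the finished game gets appended to the database later
    }
}
```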

There were a few flaws in my method, however, that presented themselves as I neared the end of writing the program. Mainly, it would take ages for the AI to become even decent at the game because, no matter how fast it played, it would almost always play a brand-new game every single iteration (Hardy’s estimate of the number of possible chess games is roundabout 10^(10^(50)), more on the subject here), causing the file containing all of its previous games to become unbelievably massive long before it got good at the game.

[Screenshot: a fraction of a single game stored in a .txt file.]

Despite all its shortcomings, this project was still a simple and fun introduction to the world of self-learning artificial intelligence.

2015 Fall Game Project – Ludum Dare 33

Ludum Dare, if you’re unfamiliar with it, is a game creation competition in which entrants are given a central theme and are required to make an entire game – code, art, sound, etc. – in two days.

The theme of the 33rd Ludum Dare competition was ‘You are the Monster’. My first thoughts went to games like Rampage for the Nintendo GameCube, a game where you play a giant monster destroying cities and causing chaos just for the hell of it. Thus, Colossus Guard was born.

[Screenshot: main menu.]

I took the idea from Rampage and combined it with an endless runner; the player can only move right, but can destroy houses and step on civilians as they run away.

[Screenshot: final product.]

It was the first of my games to feature more complex animations, as well as any sort of sound. I expected that process to take a very long time, but was pleased to find it shorter than expected. This was also the first game in which I used delta time to make movement independent of frame rate, which improved playability on other machines.
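
If you haven’t seen the technique, delta time just means measuring how long the last frame took and scaling all movement by that value, so the game plays at the same speed no matter the frame rate. A sketch of the loop (update(), render(), and the player fields are stand-ins for the game’s actual methods):

```java
long last = System.nanoTime();
while (running) {
    long now = System.nanoTime();
    double delta = (now - last) / 1_000_000_000.0; // seconds since last frame
    last = now;

    // Speeds are expressed in units per second, not units per frame
    player.x += player.velocityX * delta;

    update(delta);
    render();
}
```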

I thoroughly enjoyed this process and learned a surprising amount in those two days.

The source code can be found here (Java, 201mb): https://drive.google.com/file/d/0Bz_0wgRmDpKqckpUTnNXVFdTclU/view?usp=sharing


2015 Fall Cryptography Project – Password Generator

During the early part of 2015 I was fairly interested in cryptography. In particular, I was interested in penetration testing of all sorts – accounts, WEP and WPA wifi password cracking, etc. Of course, I was only cracking accounts I owned.

The pinnacle of my interest in pen testing came when I decided to make a password cracker. Say, for example, my password is C0d3BR3ND4N!. You enter a few words that you think your target would include in their password, as well as a few parameters, and let it go. The magic ingredients for my password would simply be brendan and code, input with the parameters -H (include capital permutations as well as numbers substituted for letters), -P !# (add ! and the numbers 0-9 before and after each candidate password), and -w (write to file), and after a few seconds the program would find my password with relative ease.

[Screenshot: a portion of the words generated.]

It essentially mashes together all of the words it’s given, generates every capital permutation and every combination of letters swapped for numbers, etc., and adds the results to the file.

[Screenshot: terminal display.]

Because this program can crack passwords, I’ll refrain from posting the source code. You can, however, access the .txt file I generated for the purposes of this post here: https://drive.google.com/file/d/0Bz_0wgRmDpKqNHFtU2pLZV96SDg/view?usp=sharing

2015 Summer Game Project – Fast Paced Rogue-Like

In an attempt to gain familiarity with different approaches to game design and with loading images and text files into projects, I created several short games over the summer that were fast, fun, and a learning experience.

Playing games like FTL and The Binding of Isaac, and even watching twitch.tv streams like FourBitFriday’s Catacomb Kids, got me really interested in the realm of roguelike games. This time, I wanted to make a game that was as organized as I could make it while also advancing my skills in animation and level generation.

In my previous games, animating had been a real challenge because I had never built anything solely for the purpose of animation. With this game, I decided to simplify the process so it wouldn’t be such a pain to implement every time I wanted a new entity. That determination pushed me to learn as much as I could about the animation process and sprite sheets as a whole, and in the end I found a pretty simple way of handling animation.
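
The approach boils down to slicing the sheet into frames and advancing an index on a timer; here’s a sketch of the idea (the BufferedImage-based structure is illustrative, not my exact class):

```java
import java.awt.image.BufferedImage;

public class Animation {
    private final BufferedImage[] frames;
    private final double frameTime; // seconds each frame is shown
    private double timer = 0;
    private int index = 0;

    // Slice a horizontal strip of equally sized frames out of a sprite sheet
    public Animation(BufferedImage sheet, int frameWidth, int frameHeight, double frameTime) {
        int count = sheet.getWidth() / frameWidth;
        frames = new BufferedImage[count];
        for (int i = 0; i < count; i++) {
            frames[i] = sheet.getSubimage(i * frameWidth, 0, frameWidth, frameHeight);
        }
        this.frameTime = frameTime;
    }

    public void update(double delta) {
        timer += delta;
        while (timer >= frameTime) { // advance (and wrap) on the timer
            timer -= frameTime;
            index = (index + 1) % frames.length;
        }
    }

    public BufferedImage currentFrame() {
        return frames[index];
    }
}
```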

[Screenshot: final product.]

I also spent a long time figuring out how to add smooth camera controls. After playing the game with a rigid camera, I knew it was time to make it smoother in order to encourage a faster pace throughout the game. Along the way I learned a lot about how cameras and perspectives interact with the current viewpoint – knowledge that would become invaluable in future projects.
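
One common way to get that smoothness is to move the camera a fraction of the remaining distance to its target every frame; here’s a sketch with hypothetical field names (I’m not showing my exact code):

```java
// Close a fraction of the gap between the camera and its target each frame.
// 'smoothing' is a tuning knob: higher values snap faster, lower values lag.
void updateCamera(double delta) {
    double smoothing = 5.0;
    double targetX = player.x - screenWidth / 2.0;  // center on the player
    double targetY = player.y - screenHeight / 2.0;
    camX += (targetX - camX) * smoothing * delta;
    camY += (targetY - camY) * smoothing * delta;
}
```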

[Screenshot: close-up of an enemy mob in the game.]

Lastly, I learned a lot about level creation. I had never made anything that could load levels before, but I wanted to build a system for it specifically for this game. I decided to use a simple program like MSPaint to draw out the levels, using colors to represent the various blocks, then load the images in and connect them room by room. This proved more of a challenge than I had originally thought, as I hadn’t anticipated the difficulty of reading the RGB values of individual pixels. I pushed through by learning not only how to inspect each individual pixel, but also how RGB values are actually encoded.
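
For anyone curious, here’s roughly what that pixel-reading looks like with BufferedImage (the exact color-to-tile mapping here is illustrative):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

class LevelLoader {
    // Read a level image and map each pixel's color to a tile type.
    static int[][] loadLevel(String path) throws IOException {
        BufferedImage img = ImageIO.read(new File(path));
        int[][] tiles = new int[img.getHeight()][img.getWidth()];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y); // packed as 0xAARRGGBB
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                if (b == 255 && r == 0 && g == 0)     tiles[y][x] = 2; // blue: enemy spawn
                else if (r == g && g == b && r < 128) tiles[y][x] = 1; // gray/black: wall
                else                                  tiles[y][x] = 0; // empty space
            }
        }
        return tiles;
    }
}
```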

[Screenshot: an example of what a level looks like in MSPaint. Gray/black = walls, blue = enemy spawn points, etc.]

This project was a fun one. I feel it really sharpened my sense of code organization as a whole, and strengthened my understanding of how the individual parts of a project come together in the end to make one product.

You can find the source code here (Java, 742kb): https://drive.google.com/file/d/0Bz_0wgRmDpKqTXNCSjFEU0V3X2M/view?usp=sharing