Revisiting and Revising

This week entailed finishing off the final puzzles of Museum Multiverse and playtesting the game with people both new and experienced with VR. I finished my hiding locker scene within the context of this project. I made this scene earlier in the Launch Pad program, but the project that the scene belonged to broke, which meant the scene was lost until I recreated it this week. I think this scene is important to convey the switches between 1st and 3rd person view within some parts of the game. I also got feedback on one of the puzzles I have been working on, and of course some people had problems with it. I took the feedback as a fault in my puzzle design. It is my job as the designer to create a fun, understandable experience for most players, so I did a couple of things to improve the puzzle.

1. I gave the player more feedback when they are on the right track.

[Image: feedback when the player is on the right track]

2. I provided clues in the environment.

[Image: clues in the environment]

3. I gave the player an awesome reward for finishing the puzzle.

[Image: the puzzle reward]

I think that after these changes players will have a better experience with one of the first puzzles of the game. I will spend the rest of the week deploying the app and testing it through the store.


Entering The Multiverse


Let me start by saying there is still much work to do on Museum Multiverse, but it is coming along. This week the team has been working on post-processing effects in order to create a cinematic warping effect when entering paintings in the museum. Unity has an awesome new system for post-processing effects, but it is not compatible with Android. Our solution is to work with legacy image effects in order to make the scene look great on the Gear VR.

I have also been working on the notion of incorporating 2D gameplay into VR. I have created a pretty good proof of concept and have now added that portion into the game. I want Museum Multiverse to be a departure from the normal VR experience on the Gear VR store and I think this section will be a refreshingly fun experience for players.

[GIF: 2D gameplay in VR]

We also added a new member to the Museum Multiverse team, Mikei Huang, a talented VR and visual designer. His portfolio includes cool VR projects like Kuru Kuru Sushi VR and Back Seat Baby. He has been working with me on the cover art and on creating visual consistency in Museum Multiverse. I am very happy to have such a talented member of the New York City gaming community on my team.

We also completed the models of the main character(s) for the game. Up to this point we have been using a simple cubed character as a placeholder for most of development, but it will be good to finally swap him for the main character. We will miss Mr. Cubes, but we are happy to have our character so close to being finalized. Our character modeler and animator, Ethan, is a talented artist with work in many visually stunning titles. Check out his Twitch channel, where he works on projects live, and his amazing GDC talk on low-poly development. We're excited to have his work in Museum Multiverse.

[Images: the main character models]

Our next steps on the roadmap are to connect all the scenes together and playtest, playtest, playtest – and then playtest some more. The more we learn about how players organically behave in our game, the better Museum Multiverse will be. One of our goals in playtesting is discovering what players enjoy as well as what they don't understand. We hope to incorporate these findings before the September 9th due date.

Until next time…


PlayNYC and the Awesome Feedback of Hundreds

This weekend the team went to PlayNYC. Play was NYC's first dedicated games convention, and according to game veterans it felt a lot like PAX in its early days.

[Image: the PlayNYC stage]

We got to show off an interactive trailer of The Take. It mostly featured the mission briefing and the traps you can set in the room. Of course, the players of the experience did not listen to anything in the mission briefing; instead they mostly had fun throwing things around and stacking books on the desk.


We had a great time, got a ton of feedback, and we are now ready to add this to the game.

Week 5: Putting it All Together


This week I finally got to put the player into the first level. My idea for this level is for the player to wake up in a room and find a way out. This will serve as an introduction to the controls. There will also be a puzzle to get out of the room, which will demonstrate the player's general understanding of the controls. The player needs to master some basic commands in order to continue in the game.

[Image: the warehouse scene]

This warehouse section is where the player wakes up and starts the experience. The player moves around pretty comfortably and the scene looks great, but most of the materials in this scene use Unity's Standard shader, and that is not good for mobile VR. Currently our draw calls are around 40 for this scene, but in some areas they are nearly 70. This needs to be fixed before we move on to the next level. However, I have hope we can fix this soon. There are some prototypes I've been working on that only have 9 draw calls; if I can figure out how that is being done, my hope is that Ernest and I can use that knowledge in the next scenes.

[Image: the player in the warehouse scene]

However, for now this is excellent progress and I cannot wait to continue on Museum Multiverse. What I have to do next is get my controller scripts working with the character. This has been harder than I thought, but I will get it working, and it will be great when I do!




Earlier today, I started to wonder how Museum Multiverse would play if experienced from a first person camera. While I know that first person platformers are not the most praised of game genres, I thought about the focus on art and how players might better appreciate the art if it is viewed from a first person perspective.

[Image: first person camera test]

We decided that over the next couple of weeks we'll create and experiment inside a small mock scene in Unity, focusing more on utilizing the Gear VR controller and manipulating objects by picking them up and turning them around. What if we could pick up a piece of art, pull it in and out, turn it around, and fully appreciate the detail in each piece? Then we can intersperse sections of fast-paced third person platforming action with quieter times of first person appreciation and exploration of art. We don't have any of the art assets in this room just yet, so we'll be using simple geometric shapes and common room items to get the feel and controls right first.

[Images: the first person mock scene in Unity]

I’ll continue to work on my third person platforming section, but I can’t rest until I thoroughly test this first person idea.

My First Week as an Oculus Launch Padder

I have learned and grown so much over the weekend.


My plan for the Oculus Launch Pad scholarship consideration is to create a three-minute demo of a 3rd person platforming experience. This is a genre that is seldom utilized and has yet to be perfected in VR. I would like to take part in expanding this genre by creating a game that mixes 3rd person gameplay with the immersion of a first person platforming experience. The first obstacle I am facing in creating this demo is gaining a better understanding of what makes a game worth playing in VR. There are plenty of 3rd person platformers out there. What makes experiences like Adventure Time: Magic Man's Head Games and Lucky's Tale unique in VR? I will continue to study these two projects in order to understand what makes them special as I craft my experience. I will report my findings and my implementations for my 3rd person platformer in my next blog post.



The project I am currently working on is called Museum Multiverse.

[Image: Museum Multiverse]

Museum Multiverse is a virtual reality puzzle platformer. This project is set in 3rd person within VR. The game starts off with a child waking up in an abandoned museum, where she must travel into the worlds of the art pieces in order to find a way out of this cursed museum. This project is best described as a mix between Playdead's Inside and Turbo Button's Adventure Time: Magic Man's Head Games, but with a greater focus on thrilling gameplay mechanics and rich, immersive VR worlds. The great part of this experience is that, along with having a thrilling VR experience, the player will primarily be visiting worlds influenced by underrepresented minority artists throughout history, like Horace Pippin, Kara Walker, and Frida Kahlo.

I will continue to update you all on my progress on this project. Stay tuned, same bat-station, same bat-channel.

A Simple Way to Upload Images To Readmes on GitHub


I have had a silly problem since the beginning of my web dev career. I always told myself I would figure it out one day, but never took the time to actually do it!

The problem? Posting pictures in readmes on GitHub.


During my time at the Flatiron School there was so much to learn that uploading images to GitHub seemed like a waste of time to figure out in the moment. Now that I have taken the time to learn how to do this, I have found the solution to be pretty easy. I wrote this post because most of my friends who make their own side projects did not know how to upload images either, so I figured there are many more people out there who do not know this process. I will now give you the simple, yet hacky, way to achieve this in a couple of clicks. Here are the steps:

1. Click on GitHub Issues within the repository you would like to add the picture to.

[Image: the Issues tab on GitHub]

2. In the Issues section of your repo, create a new issue by clicking on the green New Issue button. (Do not worry, you will not have to actually create an issue.)


3. In the bigger text box, drag in the image you'd like to have in your readme.


[Image: uploading an image in a GitHub issue]

This will create a markdown version of the image with its upload URL. This is what you want!
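For example, GitHub generates image markdown along these lines (this URL is just a made-up placeholder; yours will point at a real uploaded file):

```markdown
![my-screenshot](https://user-images.githubusercontent.com/0000000/example-image.png)
```

The `![alt text](url)` syntax is standard markdown for an inline image, so it works anywhere GitHub renders markdown, not just in readmes.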


Copy the text from the text box, then go to your readme.

4. Insert the copied text where you would like the photo in the readme. (Cool tip: you can edit and commit the readme from GitHub instead of pulling the repo, making the change, committing, then pushing.)


This is where I put my image into the readme.

[Image: the image markdown in my readme]


Once you are satisfied with your changes (you can always preview the readme by hitting the Preview button and seeing the magical changes you made), commit them. You are all set. Enjoy your awesome image in your awesome readme!


Also, if you'd like to see the repo I added the picture to, here is the GitHub location:

If you'd like to see the app in action, check out Anime Chase at:


YouTube tutorial by Dan Shahin:


Time Management: Frontend vs Backend

[Image: the weather app]

I was having the biggest problem dealing with time on my small weather app. I created logic on my Rails backend to take the current time, format that time object into a 12-hour format, then return the formatted date string to the frontend of my app. This worked perfectly… until I deployed the application. I used Heroku to deploy and host the app, but after deployment the app gained 4 hours, giving my shiny cool new app the wrong time.

[Image: the deployed app showing the wrong time]

As a junior dev at the time, I knew this must have been a problem based on where the Heroku servers were located; however, I had no idea how to handle it in Rails, and I was too new to JavaScript to handle it on the frontend. So… to solve this I monkey patched a solution: I subtracted 4 hours from the date and called it a day. The problem was solved until daylight saving time reared its ugly head. Once daylight saving happened, I had the wrong hour no matter what time zone the app was opened in.

I really did not understand how to solve this problem until I learned more about the magic of JavaScript on the frontend of an application. I learned that the time on the user's browser will always be right for the user's location, so I just asked the browser for the time.

[Image: the JavaScript time code]

By doing this I bypass the backend, the server, and Heroku, which had been messing up the time on the app.
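As a minimal sketch of that frontend approach (not the app's actual code; the function name is my own), formatting the browser's local time as a 12-hour string might look like this:

```javascript
// Format a time as a 12-hour clock string, e.g. "1:30 PM".
// Because Date reads the clock in the browser's own time zone,
// the result is always right for the user's location, with no
// server-side offset math and no daylight saving surprises.
function localTime12h(date = new Date()) {
  let hours = date.getHours();                           // 0-23, local time
  const minutes = String(date.getMinutes()).padStart(2, "0");
  const period = hours >= 12 ? "PM" : "AM";
  hours = hours % 12 || 12;                              // map 0 -> 12 for midnight/noon
  return `${hours}:${minutes} ${period}`;
}
```

Calling `localTime12h()` with no argument formats the current moment; passing a `Date` built from local components makes it easy to test.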

Now the Forcast GIF shows the right time and all is right in the world… for now.