This is a technical blog post about a problem and solution that I encountered with Unity 3D and Azure. Hopefully it will be of use to someone out there.
Lately I’ve taken an interest in Microsoft’s Cognitive Services suite, and given some talks about Cognitive Services around the country. Since it makes sense to take some notes on this process, I’m going to write a couple of blog entries going through some of the services available and some sample code for people who want to try it out. Before I start doing that, I’ll just write a brief intro about what Cognitive Services are and why they’re pretty cool.
Cognitive Services used to be called Microsoft Project Oxford, and since its debut it has expanded to include more features. It’s a suite of machine-learning APIs that work cross-platform to provide capabilities such as facial recognition in images, voice recognition of speakers, and video stabilization. There’s a variety of APIs to explore here. You can access the APIs from any kind of app you want to interface with them, though I’ve been coding in C# the most lately, so that’s what I’ll use in examples.
You might already be familiar with Cognitive Services if you tried out the How-Old.Net website back in May of last year. (I recall this going viral, followed by a big scare about whether Microsoft was saving your images and violating privacy, but… the site doesn’t save your images, so don’t worry if you want to play around with it.) There’s also another favorite, the Fetch app, which is at https://www.what-dog.net/. It can recognize breeds of dog, or tell you what dog it thinks you are…
It thinks I’m a poodle.
You should check out all the APIs that are available and just play around. I particularly have fun with the Image Analysis API. It’s pretty smart, though it’s not always a hundred percent right, which is why it returns a confidence rating whenever it analyzes a picture. What also impresses me is that it recognizes a pretty amazing variety of celebrities, including people from television and Broadway. Also, I will stealthily point out that it judges pictures on their Adult Content and Raciness Rating, which has lots of practical and impractical applications.
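Under the hood, these are plain REST calls. Here’s a minimal sketch (in Python, just to show the shape of the request) of how an Image Analysis call gets put together — the endpoint URL, feature names, and key below are placeholders from the Project Oxford era, so check the current docs before relying on them:

```python
from urllib.request import Request

# Placeholder endpoint from the Project Oxford days -- verify against
# the current Cognitive Services documentation before using.
ANALYZE_URL = "https://api.projectoxford.ai/vision/v1.0/analyze"

def build_analyze_request(subscription_key, image_bytes,
                          features=("Description", "Adult")):
    """Build the HTTP request for an image-analysis call.

    The JSON response includes captions and tags, each with a
    confidence score, plus the adult/racy ratings when requested.
    """
    url = ANALYZE_URL + "?visualFeatures=" + ",".join(features)
    return Request(
        url,
        data=image_bytes,            # raw image bytes go in the body
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )
```

From there it’s just a matter of sending the request (e.g. with `urllib.request.urlopen`) and parsing the JSON response for the tags, captions, and confidence scores.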
As an Interactive Fiction fan, I’m also really excited about the possibilities in the natural language understanding API. With my background, I think of it as a potential for a much better text parser for a conversational IF. I’m going to be approaching a new project from that angle.
If you want to do more than just play and actually use these in an app, you’ll need a Microsoft account to log in and get API keys. (It’s the “My Account” button in the top corner.) I’ll post a step-by-step later, but if you can’t wait, you may also see me at Philly Code Camp this Saturday discussing them! Maybe see you this weekend!
I’m leaving this here to avoid causing any confusion, but I am sorry to report the event has been postponed.
We’ll update if there is a rain date for this event; we are still committed to bringing the combined Vive and HoloLens content at a later date.
Original copy below:
This post is to document a weird error for posterity. But first, some background:
We’ve been working for a while on a device that uses the Raspberry Pi 2 running Windows 10. It’s neat! If you want more information about how to install Windows 10 on a Raspberry Pi 2 or 3, go here:
Basically, the steps are:
- Get Pi
- Get an SD Card for the Pi
- Download and install Windows 10 on that SD Card
- Slip that card into the Pi and boot
- Connect your board to a network (I used wired at first, but a wireless gadget can work too)
- Deploy code to the board
The first few steps of getting Windows up and running are really straightforward!
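For the networking step, a quick sanity check is whether anything on the board answers the network at all. Here’s a minimal sketch in Python — the device name “minwinpc” and port 8080 are IoT Core defaults, and may differ on your setup:

```python
import socket

def is_reachable(host, port=8080, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    Windows 10 IoT Core serves its web-based Device Portal on
    port 8080 by default, so a successful connect is a decent
    sign the board is up and on the network.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or name didn't resolve
        return False
```

Calling `is_reachable("minwinpc")` from your dev machine (substituting your board’s name or IP) before you try to deploy can save a lot of head-scratching later.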
But I did run into an intermittent problem. Folks told me it was worth writing this entry in case other people run into the same problem.
Windows 10 worked great for me when my Pi was plugged into the wall. However, I’m creating a device that I want to be portable, so I’m powering the Pi, and the Pi’s tiny screen, with these portable power supplies that run on AA batteries. They work great! Mostly!
However, the battery drain from running an entire computer and a small monitor was more than I initially imagined. And batteries don’t die evenly. When the battery power is low, but not entirely dead, the Pi might give the impression that it’s working even though it doesn’t actually have enough juice to work. When that happens, it will start dropping off the network, stop allowing you to push code, and then, finally, Windows itself will stop booting.
When that happens, you might get the dreaded Frowny Face error.
This error is very mysterious. A frowny face definitely tells you something is wrong, but it really doesn’t help you understand what.
I did find the frowny face hilarious though. My Pi is currently in a 3D printed case that has a small window, so the frowny face made it look just like Game Boy from the Captain N cartoon…
Sorry, I’m showing my age. I think this is also a character on Adventure Time?
It’s the same character, people; it’s amazing! Only mine was, as I mentioned, frowning.
Anyway, I discovered that the problem, essentially, was that the Pi had some power, but not enough power. When it didn’t have enough juice, it wasn’t able to boot up Windows 10 after all. All it could do about this was be sad! The problem was fixed when I simply replaced the batteries in my portable power supply. I rebooted Windows, and I was able to push code again. But since it seemed like it was sort of working, I’m ashamed to admit it took me quite a while to figure out that the problem was the power supply, rather than Windows or the SD card. This is especially true because it doesn’t all fail at once. First pushing code stops working, then Windows stops working, as the power gets lower and lower. So if you do get the frowny face, don’t despair! Try getting more power and the Pi will work again.
I know I usually write about games and tech, so forgive me: I’m going to write about something personal (with, maybe just a little about tech).
Over the past two years I’ve started taking my health more seriously and worked hard on losing some weight. Microsoft incentivizes employees to get yearly checkups. When I got my first checkup, I realized that my weight had climbed up over 200 lbs, and I was not happy about that. I don’t necessarily think that weight loss is vital for everyone’s health, but I wanted it for myself.
As of this writing I’ve lost about 50 pounds over the last two years. It’s been a gradual process. Last weekend I went out shopping, and bought some clothes that actually fit me. So this week, when I went out to see friends, the difference was more noticeable. People who haven’t seen me in a while always remark that I look very different. That’s a good feeling, but still a mixed feeling. I feel like I still have a long way to go. I will talk about my process and journey with enthusiasm to anyone who asks, so I figured I’d go ahead and put it in writing to get some of those feelings out in a more organized way.
Anyone who knows me knows how extremely excited I was for the HoloLens, the Mixed Reality device being developed by Microsoft. When I saw the live demonstrations of the device, I knew I had to have it.
As excited as I was about the HoloLens, though, I was excited for reasons beyond my usual ones. HoloLens has tremendous potential in the fields of science, medicine, construction, education, and engineering. I was not really sure how it would fare as a gaming device, however. It seemed likely that casual games could find a home there, but could a device that overlays holograms onto the real world be home to a deep narrative experience?
Well, now I’ve played the game Fragments on HoloLens, and I am convinced.
Recently, a student (I’ll leave out details) emailed me these questions about the process of game development and how to get started. I haven’t posted since MAGFest, so I decided to answer them here too. That way anyone who is interested in answers to some basic questions can see them. I think beginners to game dev often assume there are hard-and-fast answers when there usually are not, especially since there are so many routes into game development. Everyone is going to give you slightly different advice, so here is mine.
I was asked:
I just returned from MAGFest in Maryland, and I had a terrific time. I am so proud to be part of the MAGES group at MAGFest, and I enjoy giving panels to talk about my experiences as a gamer and game creator. Plus, the music rocks!
I have a full writeup now up at Tap-Repeatedly.com, but I want to just shoot this quick entry here for anyone who found my blog via meeting me at the festival.
I have done a big update today to my Upcoming Events Calendar, so if you missed me at MAGFest or if you want to see me again somewhere else, check there to see where I’ll be next! This list isn’t final, so I may add a few more things, particularly in April as Philly Tech Week gets more planned out.
You may notice there’s a bit of a gap in March. Sadly, I will not be attending GDC this year. I’m taking the month for some needed vacation, to visit family, and to work on some other professional projects. You’ll see more video blogs from me on Channel 9 really soon! If you haven’t yet, please check out my latest on the Game Dev Show: “What Does a Universal Application Platform Mean for Game Devs?”
Since this summer, Microsoft Evangelists have been working hard to put more content on Channel 9, Microsoft’s home for tutorials and video blogs.
Today, I uploaded a short video, a ten-minute summary of the hour-long talk I gave at CodeMash about mobile game design! I hit the highlights and go over some tips about mobile design I’ve learned from years of observation. Check it out here:
The Raw Tech blog series is uploaded by Evangelists like myself. There have been so many new videos that it would take days to watch them all now, but here’s some stuff that might be of particular interest to people who read my blog:
Stacey Mulcahy on using a Breadboard (important stuff for beginning Makers!)
This year, I’m going to help contribute to more game content on Channel 9, and Livi Erickson’s awesome AR/VR show!
In the process, I went back and read my Uncharted review. I played Uncharted really late, and I actually didn’t much like the first Uncharted when I played it. On the other hand, I liked the Tomb Raider reboot and its sequel quite a bit. This is despite the fact that, as I mention in my review, they use more or less the same game format as the first Uncharted. So what’s different? I spend a quick paragraph on it in the review but I want to examine in a more rambling fashion this idea of environmental affordance. I think it’s an important component of modern game environment design.
I’m one of the rare gamers who has written some stuff critical of Final Fantasy VII. Just this week I read this article about the remake trailers, written by Brendan Keogh. I think it’s interesting that he talks about how FFVII “leans into its technical limitations,” because I’ve always found the art direction in FFVII uneven for this very reason. Sometimes the environments in dungeons were so high fidelity compared to my weird little block character that it wasn’t even clear where I was able to walk. Fortunately, the designers of the game knew this and allowed an optional waypoint graphic to appear when needed. This was a trend-setter for many years to come.
These days, we have “Detective Mode.” This is most famous from the Rocksteady Batman games, and in the first game, Arkham Asylum, it’s so useful that it’s basically pointless to even turn it off. Tomb Raider has a similar vision mode called Survival Instincts. It’s balanced by the fact that the player can’t leave it on while in motion; it only flickers up for a brief time. That is, unless you disagree that it’s balanced at all. I’ve seen some people, such as Andrew Reiner here, write that the mode makes the game a bit too easy.
Tomb Raider and Rise of the Tomb Raider do another thing well, though: they make the objects in the environment that can be interacted with very similar in appearance. Any tree in Rise of the Tomb Raider that I can climb looks like every other climbable tree, with a flat bit of exposed wood under the bark and some obviously stripped branches. Rock walls suitable for the climbing axe all have the same pockmarked bump map. And most ledges Lara can hang from have a slight white highlight on the top edge, usually a streak of paint, though sometimes it’s just a patch of snow or a trick of the light. This may not be realistic, but I don’t care. It’s a price I’m willing to pay for it being really obvious what I can and can’t interact with in the environment. This part of the game’s texturing is consistent enough that I rarely needed Survival Instincts to figure out a traversal path, though it was useful occasionally, especially when the way forward wasn’t immediately clear.
Consistent assets help out with affordances as well. There are a few traversal methods later in the game that require objects. If there’s a place I can axe-grapple and swing, the hook that I need to hang from always looks very distinct. The weights and cranks used for puzzle solving are always similar-looking assets as well. This is probably convenient for the developers in that they can re-use the same environment assets from time to time, but it’s also incredibly useful for gamers in that an axe crank always looks like an axe crank. This way I can get to figuring out how to solve the puzzle, instead of just milling around trying to figure out which part of the puzzle is the interactive part.
These two factors combined make me wonder how the game would play without the Survival Instincts vision. Overall, though, I found the vision mode just too useful to live without, especially when finding collectables or enemies in the environment. I think Rise of the Tomb Raider would be a terrific game to study for a basic primer on how to make environments read clearly even when they’re dense with information. This kind of thing means the difference between a game I enjoy and a game that makes me want to tear my hair out in frustration.