Stuff that I’ve been doing

It has been some time since my last post; that’s because I’m in a bit of a crunch right now, mostly split between my thesis work and my actual job. I’ve also got 3 semesters in as an instructor for an introductory class in Computers and IT. I have to say it’s been a really nice experience, and next semester will probably be my last doing it.

My thesis has to do with the study of Immersion and Presence in Videogames. I won’t go into too much detail on it right now; let me just say that I’ve been working with UDK to set up my experiment. I have learned a lot about 3D game design these past couple of months, and a lot about how a 3D videogame engine works as a whole. There’s a learning curve, but engines help you a ton; honestly, if engines like this didn’t exist, my thesis would be impossible. After I finish my work here I’d like to go even further and work on a fully fledged game using this tool.

Personal projects have also been rolling along a bit slowly this past month. We got a big bummer when the National STEM Video Game Challenge finally revealed its rules for the 2012 competition: in a few words, they don’t accept international students. Oh well. We were working on a really awesome concept and an innovative technology for the game (kudos to Eugenio for that), but the project got canned because of this. In any case, we’ll move on to another competition with a different proposal. I’ll post more on that later.

Other than that, I’ve been elected President of the International Student Association at UTPA. Things are going smoothly and we’ve already held two fundraisers. So far the entire organization has been helping a lot, so I think this next semester we’ll be able to do some trips and events.

I’ve got a lot of ideas for posts; hopefully I’ll be able to write them up and share them with you soon enough!


Pushing the wrong buttons

Sometimes it’s amazing how much stuff is made without even thinking about the people who will use it. I’ve seen this problem in many things, not just computers.

This past Friday I was using an elevator that had this interface. Pretty common, right?

Elevator Panel Picture

Well, there are several things we could do to make a better panel, but let me focus on one that is pretty much common sense. The close button is there to close the doors before the elevator’s pre-programmed delay runs out; you usually use it when you’re in a rush and no one else is boarding, as a way to save some time. Next to it is the alarm button, used to signal an emergency, such as getting trapped in the elevator because of a malfunction.

The alarm button should never be pushed when there’s no problem at all: it usually triggers a very loud alarm that can confuse or even freak out people in the building, and it needlessly wastes the time of the people in charge of emergencies, who have to come and check on a false alarm. And there it is, sitting right next to a button you press when you’re in a rush or simply don’t want to wait, a stressful situation that increases the probability of error.

How long will it take before someone pushes the wrong button? Not long, of course.

Not only that, look at the close button’s icon: two arrows that can be read as coming from the buttons beside them. In other words, people may be unsure whether the arrows simply depict an action the elevator performs or whether they point at the neighboring buttons to indicate their function. Working that out requires extra cognition, something we don’t usually spend when we’re in a rush. And what about blind people? The Braille code for that particular icon is actually split in two, implying that the button to the left might be one of the choices.

This could be remedied by keeping the panel’s spacing consistent, and while it does a good job of keeping each button to the right of its respective label, it’s amazing how little details like this can create big problems in a user interface.


Cell phones + Kinect = Good idea?

Kinect and the iPhone
So yesterday I found a video about how “Kinect”-like technology is reaching the cell phone market. Pantech is releasing a phone with this technology in Korea around November. Video of the commercial below:

First impression: pretty neat. Now let’s take this idea apart.

In a practical sense, there are some things that aren’t apparent here. When someone is calling you, you can easily pick up with a gesture, as the ad shows; but when you’re the one who wants to initiate a call, it won’t work. Cell phones usually turn off all their sensors when they’re locked to save battery, and there’s simply no way to keep the camera on all the time just in case we want to perform a task with our phones. As I said, this isn’t a problem for picking up a call, since the phone is already awake when it rings; but when it’s locked or sleeping, the phone is not listening to you. Think of it as trying to use Siri on the iPhone 4S without touching the button.

But there is definite practical use for this, and I might add it’s a really cool idea to add this functionality. Although a camera actively checking for hand gestures might be a real battery drain, there might be a way to build a decent camera that performs the task well without killing your battery every time you use your phone.
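
To make that concrete, here’s a minimal sketch of how detection could be gated on device state so the cost is only paid when a gesture can actually matter. Everything here is made up for illustration (PhoneState, GestureCamera, the frame rates); it’s not a real phone API.

```typescript
// A minimal sketch, not a real phone API: "GestureCamera" and "PhoneState"
// are made-up names that just illustrate gating the sensor on device state.

type PhoneState = "locked" | "awake" | "ringing";

interface GestureCamera {
  start(fps: number): void; // begin grabbing frames for hand detection
  stop(): void;             // power the sensor down completely
}

class GestureController {
  constructor(private camera: GestureCamera) {}

  // Called by the (hypothetical) OS whenever the device state changes.
  onStateChange(state: PhoneState): void {
    switch (state) {
      case "ringing":
        // Incoming call: the phone is already awake, so "wave to answer"
        // costs little extra battery. Run detection at a usable frame rate.
        this.camera.start(15);
        break;
      case "awake":
        // Screen on: keep the frame rate low to limit the drain.
        this.camera.start(5);
        break;
      case "locked":
        // Screen off: the camera can't stay on all the time,
        // so gesture-initiated actions are simply unavailable here.
        this.camera.stop();
        break;
    }
  }
}
```

The point is that the camera only runs while the phone is already awake or ringing, which is exactly why gesture-initiating a call from a locked phone doesn’t fit this model.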

My main problem with the video is its lack of vision and innovation. By reproducing gestures that already work well with touch, you’re just creating a gimmick: a device that is just for show and adds little to nothing to its purpose. Swiping photos by waving your hand across the device doesn’t make it better, it just makes it cooler, a factor that wears off in a week or less, and it can actually make the user angry when battery life inevitably gets shorter; either they introduce a great advance in energy management or the phone will end up pretty bulky.

Still, the idea is awesome, but it has to be justified with great implementations, like improving current touch gestures or creating useful gestures that can’t be done on current touch devices. A nice example would be using a depth sensor to check how close or far the hand is from the device and making the GUI react as a function of those changes. For drag and drop this could be perfect: imagine you could pinch several objects to group them, then raise your hand, scroll around the GUI, and drop them wherever you like (there’s a rough sketch of this after the figure below). This kind of interaction could be really uncomfortable on a touch-only device, and a visual recognition approach could be useful.

Drag and Drop concept for "Kinect" powered phones
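
Just to pin the idea down, here’s a rough sketch of that pinch / lift / drop interaction as a little state machine. DepthFrame, itemAt and dropAt are hypothetical stand-ins for the hand tracker and the GUI; none of this is a real sensor or UI API.

```typescript
// A minimal sketch of the pinch / lift / drop idea. "DepthFrame" and the
// itemAt/dropAt callbacks are hypothetical, not any real sensor or UI API.

interface DepthFrame {
  pinching: boolean; // hand is closed in a pinch
  x: number;         // hand position over the screen, in pixels
  y: number;
  depthCm: number;   // distance between hand and screen, in centimetres
}

type Mode = "idle" | "grouping" | "carrying";

class DepthDragAndDrop {
  private mode: Mode = "idle";
  private group: string[] = [];

  constructor(
    private itemAt: (x: number, y: number) => string | null,
    private dropAt: (items: string[], x: number, y: number) => void,
    private liftThresholdCm = 8 // raising the hand past this "lifts" the group
  ) {}

  // Feed one frame of hand-tracking data per update.
  update(frame: DepthFrame): void {
    switch (this.mode) {
      case "idle":
        if (frame.pinching) this.mode = "grouping";
        break;
      case "grouping": {
        // Each pinch over an item adds it to the group being built.
        const item = frame.pinching ? this.itemAt(frame.x, frame.y) : null;
        if (item && !this.group.includes(item)) this.group.push(item);
        // Raising the hand away from the screen picks the whole group up.
        if (frame.depthCm > this.liftThresholdCm) this.mode = "carrying";
        break;
      }
      case "carrying":
        // Lowering the hand back toward the screen drops the group there.
        if (frame.depthCm <= this.liftThresholdCm) {
          this.dropAt(this.group, frame.x, frame.y);
          this.group = [];
          this.mode = "idle";
        }
        break;
    }
  }
}
```

The useful part is the third state: hovering at a different depth gives you a “carrying” mode that a flat touch surface has no clean way to express.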

That was just one example of what you could do with this kind of device. Another would be to replace a lot of the 3D gestures that usually need an anchor, in other words, the use of two hands.

Of course, such a device would have to be well planned at every level to achieve an integral solution, but I think it’s feasible as a mass product in the near future. It’s a good idea, but I don’t think the product shown in the ad will be the one to popularize it.


First!

Hello everyone and welcome to my blog. I set it up because I want to expand my online presence by sharing ideas, things I’ve learned, and personal opinions about the topics I love the most: Computers, Interfaces, User Experience / HCI, and Videogames.

I’ll be talking about these topics across their whole spectrum, from Computer Science and Engineering to Psychology. So yeah, who knows? Maybe you’ll see some discussion of how users behave with a certain interface, then suddenly some code appears in the next post, and then a quick draft of an idea.

Hopefully people will find this site insightful. That would be my main satisfaction.

Thanks for reading and stay tuned for my first REAL post.