Archive for the ‘semiotics’ Category

Following my initial post on the aforementioned forums, the feedback received so far has been mixed, with more interest shown on the mainstream forums than on the deaf/disability-oriented forums.

In many cases, respondents' suggestions were off-topic, predominantly concerning methods such as force feedback, which, while a worthwhile area of research, falls outside the scope of my research question. However, the general opinion of the project was encouraging.

Overall, however, only one respondent answered the initial questions directly, perhaps because their open-endedness put some users off or confused them. The suggestions given included sounds that can be associated with a particular mood, and using attributes such as colour to represent the intensity of a sound.
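
To make the second of those suggestions a little more concrete, below is a minimal sketch of how a sound's intensity might drive the colour of an on-screen cue. It is purely illustrative: the SoundCue structure, the 0–1 intensity scale and the blue-to-red colour ramp are my own assumptions, not anything the respondent specified.

```python
from dataclasses import dataclass

@dataclass
class SoundCue:
    """Hypothetical on-screen cue describing a game sound."""
    mood: str         # e.g. "tense", "calm" -- assumed mood labels
    intensity: float  # 0.0 (barely audible) to 1.0 (very loud)

def intensity_to_colour(intensity: float) -> tuple[int, int, int]:
    """Map intensity to an RGB colour: cool blue for quiet sounds,
    hot red for loud ones (an assumed, illustrative ramp)."""
    level = max(0.0, min(1.0, intensity))
    return (int(255 * level), 64, int(255 * (1.0 - level)))

# A loud, tense sound would be drawn in a hot colour...
print(intensity_to_colour(SoundCue("tense", 0.9).intensity))  # (229, 64, 25)
# ...while a quiet, calm one stays cool.
print(intensity_to_colour(SoundCue("calm", 0.1).intensity))   # (25, 64, 229)
```

The point is simply that a visual attribute such as colour can carry the "how loud is it?" information that text would otherwise have to spell out.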

Another promising lead from the forum posts came from John Bannick, CTO of 7-128 Software, a company which publishes accessible games. He referred me to an article by Luis Valente, Clarisse Sieckenius de Souza, and Bruno Feijó, entitled “Turn off the graphics: designing non-visual interfaces for mobile phone games”. While the article focuses on accessibility for visually impaired gamers, the way the authors have categorised the signs they designed is appealing. Several sources referenced by the article sound promising, including de Souza, C.S. (2005) ‘The Semiotic Engineering of Human-Computer Interaction’, which I’m awaiting in the post. Hopefully this will help me properly establish a link between semiotics and my project, since my previous reading on semiotics hasn’t been very inspiring.

Since we started discussing ideas for our dissertations back in February, I’d been thinking around the area of accessibility in games. In my searches I cast a wide net, encompassing many of the existing approaches to accessibility. I looked at examples such as sound games for the blind (Terraformers, for example), hardware devices such as switches for those with limited dexterity, and d/Deaf-aware practices in videogames (noteworthy examples being the captions in Half Life 2 and the Doom 3 Closed Captioning Mod). Given my personal relationships with deaf people, this is an area I am particularly sensitive to, but it wasn’t until I read an article by Richard A. van Tol, The Sound Alternative, that I really began to formulate my ideas.

Van Tol discusses approaches beyond captions for game audio that can make games more accessible to d/Deaf players. He begins with a brief description of the problems of relying on audio alone to deliver information to the player, before discussing some alternatives. Among the examples he gives are action captions like those found in comics (BOOM!) and speaker portraits (where the character speaking is highlighted with a picture-in-picture portrait). The latter makes sense given that deaf people can rely heavily on lipreading, and therefore on face-to-face communication. While lipreading would not be required in a game, knowing which character is speaking is essential.

He also discusses sound balloons. Like the speech balloons used in comics, these could be used to show how a character is speaking (shouting or whispering, for instance). Van Tol suggests that such sound balloons could also be used for in-game sounds.
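
As a rough illustration of how the shouting/whispering distinction could be derived from the game's audio data, here is a small sketch that picks a balloon style from a line's loudness. The thresholds and style names are assumptions of mine, not anything van Tol proposes.

```python
def balloon_style(loudness: float) -> str:
    """Choose a comic-style balloon for a spoken line based on its
    loudness (assumed to be normalised to the range 0.0-1.0)."""
    if loudness < 0.3:
        return "dashed whisper balloon"  # quiet, conspiratorial delivery
    if loudness > 0.7:
        return "jagged shout balloon"    # loud, aggressive delivery
    return "plain speech balloon"        # ordinary conversation

print(balloon_style(0.1))  # dashed whisper balloon
print(balloon_style(0.9))  # jagged shout balloon
```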

On a similar theme, he discusses sound visualisations, and this is what really caught my attention. Sound visualisations (images or animations relating to a sound) can, he suggests, be used to show the behaviour of a sound and, even more interestingly, provide additional emotional information. Examples of sound visualisations in existing games can be found in The Sims series, where objects like the telephone and stereo emit visible sound waves and musical notes.
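
To sketch how a game might drive such visualisations, here is one way a non-speech sound event, tagged with an emotional label, could be turned into a Sims-style visual effect. The SoundEvent fields and the mood-to-animation table are entirely my own assumptions for illustration; neither van Tol nor The Sims specifies anything like this.

```python
from dataclasses import dataclass

@dataclass
class SoundEvent:
    """Hypothetical description of a non-speech sound about to be played."""
    source: str  # e.g. "telephone", "stereo", "creaking_door"
    mood: str    # assumed emotional tag attached by the designer

# Assumed mapping from emotional tone to a visual treatment.
MOOD_VISUALS = {
    "cheerful": "bright musical notes drifting upwards",
    "tense":    "tight, rapid ripples in a dark colour",
    "menacing": "slow, heavy shockwave rings",
}

def visualise(event: SoundEvent) -> str:
    """Describe the visual effect to draw at the sound's source,
    falling back to plain sound waves for unknown moods."""
    effect = MOOD_VISUALS.get(event.mood, "simple sound waves")
    return f"{effect} radiating from the {event.source}"

print(visualise(SoundEvent("stereo", "cheerful")))
print(visualise(SoundEvent("creaking_door", "menacing")))
```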

This idea really inspired me because, while I feel that subtitles and captions in games do a wonderful job from a usability point of view (especially for dialogue), sound in games is used for much more than simply helping the player to play the game. In some mock presentations at the beginning of the year I presented this idea to my lecturers, and one of them suggested I look at mood, something in which sound plays a major role, and consider how my sound visualisations could perform a similar role.

I should stress that I am not looking to replace sound. There seems to be a misconception that deaf people cannot appreciate sound; I know from personal experience that they can and do. To that end, my sound visualisations are intended to enhance the role performed by sound, allowing deaf gamers to appreciate the game experience, in this case its mood, even more.

Our coursework in May was our dissertation proposal. I proposed the following question:

What is the effect of sound visualisations on deaf gamers’ awareness of the game world?

Additionally, I stated that my research would attempt to answer the following sub-questions:

  • What alternatives are there to textual signifiers as an aid to deaf gamers’ awareness in videogames?
  • How effective are textual/non-textual signifiers at suggesting mood in videogames?
  • To what extent are these signifiers seen as intrusive by hearing gamers?

It is important to note that I intend to develop a system that is beneficial to both hearing and deaf gamers. My initial reading, particularly Understanding Deaf Culture: In Search of Deafhood by Paddy Ladd (2007), highlighted the debate between the medical and social approaches to deafness: medical approaches tend to see a problem that must be ‘fixed’, rather than acknowledging that the negative attitudes of a society which excludes and marginalises deaf people are the real area to be addressed. I feel that focusing solely on deaf gamers would only reinforce this attitude of segregation, since such a system may have limited mainstream viability; I therefore intend to develop and test a system with both deaf and hearing gamers.

While deaf studies is not my field, I feel that this project presents an opportunity to raise issues of deaf awareness in games design. Despite individual successes like Half Life 2, there still seems to be great ignorance in the games industry when it comes to accessibility, particularly in creating games that are deaf-friendly. Where accessibility options are considered, the default is to throw in subtitles. Subtitles are much appreciated, but I don’t feel they are the most effective choice in every scenario (especially when you consider that the first language of big ‘D’ Deaf people is a sign language such as BSL).

My research is therefore looking at sound visualisations as an alternative to subtitles/captions for representing sound, particularly the sounds used to suggest mood in games.
