January 29, 2011

Monocular Cues: Let's Explore Some In Depth (pun intended)

Being able to perceive depth helps humans see a three-dimensional world. Cues from the environment, such as relative size, shape, shadow, and texture, are needed to achieve this feat. Monocular depth cues are those that require only one eye to perceive. In this special edition of SINsations - The Seven Deadly Senses, we will explain the ten types of monocular cues, which fall into two subtypes: pictorial cues, which can be conveyed in a still picture, and motion-produced cues, which arise when the observer is in motion.


Occlusion


When encountering pictorial cues, one must always be careful when considering Occlusion. As pictorial cues go, Occlusion isn't very informative; he only turns up whenever something in a person's field of vision partially blocks her view of another object, and all he can tell you is that the blocking object is nearer. He's limited that way, but don't tell it to his face; giving obstructions the power to assert their proximity over occluded targets is all he ever wanted to do with his life.

Here are several amusing pictures; Occlusion may not be very helpful, but he sure can work a crowd.


The cowboy is in front of the woman.


The snake is in front of Van Damme's fist.


The child is farther away than the Ewok.


The lady is farther afield than the man's hand.


The Velociraptor is behind the post.


Samuel L. Jackson is behind the drink.


Mikki is closer to the camera than Allen.
Relative Height
Basically, the cue of relative height means that objects below the horizon whose bases are higher in the field of view are usually seen as more distant. In this picture of the Beatles on Abbey Road, we can see that the Beatles are at the same distance because their bases, their feet, are at the same height. If we look at the white car and the black car, we can see that the black car is a bit farther away because its base, its fender, is higher in the field of view than that of the white car.


Relative Size
The cue of relative size means that when two objects are of equal physical size, the one that is farther away takes up less of your field of view than the one that is closer. This cue can be seen in the image above. Notice, for example, the number eight ball and the number four ball: we know that the two balls are the same size, but the number four ball looks smaller and takes up less of our field of view because it is farther away than the number eight ball.
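To put a rough number on this, here is a minimal sketch (in Python, with my own illustrative figures, not from the original post) of how the visual angle of a fixed-size object shrinks with distance:

import math

def visual_angle_deg(size_m, distance_m):
    # Visual angle subtended by an object of a given size at a given distance
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

ball_diameter = 0.057  # a standard pool ball is about 5.7 cm across
print(visual_angle_deg(ball_diameter, 0.5))  # ball half a meter away: ~6.5 degrees
print(visual_angle_deg(ball_diameter, 2.0))  # same ball four times farther: ~1.6 degrees

Same physical size, about a quarter of the visual angle: that is relative size at work.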


Perspective Convergence
In a picture showing depth, two parallel lines that stretch away from the viewer appear to come closer and closer together as they extend into the distance. In this picture, for example, the lines on the road are actually parallel, but in conveying the depth and distance of the scene, they appear to converge the farther out they extend.
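A simple pinhole-projection sketch (my own illustration, not part of the original post) shows why the gap between parallel lines shrinks in the image: the projected separation is proportional to the real-world width divided by the depth.

def projected_gap(width_m, depth_m, focal_length_m=0.05):
    # Pinhole camera model: image size = focal length * real size / depth,
    # so a constant real-world gap projects smaller as depth increases.
    return focal_length_m * width_m / depth_m

for depth in (5.0, 20.0, 80.0):
    print(depth, projected_gap(3.5, depth))  # a 3.5 m wide lane looks ever narrower with depth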


Familiar Size


Using prior knowledge about the size of objects to judge their distance is using the familiar size cue. Simply put, under certain conditions, our knowledge of an object’s size influences our perception of its distance. In the example, we see the Eiffel Tower and a person. From prior knowledge, we know that the Eiffel Tower is very big, around 320 m tall, and that the average person stands only about 1.5-2 meters. This tells us that the Eiffel Tower must be far behind the person for it to look that small.
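As a back-of-the-envelope check (my own arithmetic, assuming the tower and the person happen to subtend roughly the same visual angle in the photo), the distance ratio must roughly match the size ratio:

tower_height = 320.0   # Eiffel Tower, in meters (figure from the post)
person_height = 1.75   # an average person, in meters
# If both subtend about the same visual angle, their distances from the
# viewer must be in roughly the same ratio as their physical heights.
distance_ratio = tower_height / person_height
print(distance_ratio)  # ~183: the tower must be on the order of 180 times farther away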



Atmospheric Perspective

Atmospheric perspective occurs when objects that are farther away appear less sharp, slightly blurred, and tinged with blue. The farther the object is, the more particles (air, dust, water droplets, etc.) we have to look through. In the following examples, the nearer objects are sharp, with clearer details, compared to the objects farther away.



Texture Gradient

A texture gradient can be used to show depth in a still frame. In this picture, we know that the flowers are actually equally spaced and distributed across the field. But as the field extends into the distance, the flowers appear more tightly packed than those closer to the viewer. This gradient of texture creates an impression of depth.
Shadows


Shadows are created whenever light is occluded by an object. The length of the resulting shadow can be worked through with similar triangles to compute the object's height. This is very useful for measuring tall, well-lit objects like flagpoles; many a high school student has successfully applied the teachings of Math to such a worthy assignment.
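For the flagpole exercise, the working tool is similar triangles: every upright object lit by the same (effectively parallel) sunlight has the same height-to-shadow ratio. A minimal sketch with invented numbers:

def height_from_shadow(shadow_m, ref_height_m, ref_shadow_m):
    # Similar triangles: height / shadow length is the same for every
    # upright object under the same sun angle.
    return shadow_m * ref_height_m / ref_shadow_m

# Suppose a 1 m stick casts a 0.8 m shadow while the flagpole's shadow is 9.6 m long.
print(height_from_shadow(9.6, 1.0, 0.8))  # flagpole height: 12.0 m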


Shadows are also pretty useful when assessing the contours of an observed object. Variations in the size and intensity of the cast shadow can do a great deal for 3D perception.


Shadows really bring melee combat to life, don't they?
Godzilla sure looks big. Thank lighting for the neck-shadow!
A lightsaber will help shine the way.
Big shadow = big Decepticon.
The Shadow: Bringing depth to 2D art since a long time ago, in a galaxy far, far away.
Even without familiar objects for comparison, you know these AT-ATs are big. Thank the shadows!
Motion Parallax

During car rides, people are fond of looking out their windows to enjoy the scenery. On one of my night rides, I couldn't help but notice how the differently colored lights on nearby buildings on my side of the road seemed to blur as they moved past me. The moon, however, seemed to move at a much slower rate, and the same could be said of the mountains on the horizon. How is this possible when they all seem to lie on the same plane of vision?
Motion Parallax is the key to understanding this perceptual phenomenon.  The use of this depth cue in cartoons may provide an explanation. 
We came across this interesting short clip of a mouse using a rocket to fly over varied terrain.


We can observe how the trees, tall shrubs, or grass (I don't know exactly what they are) seem to glide rapidly past our vision compared to the clouds and mountains that make up the distant background. The differences in the apparent speed of objects while we are moving can be explained at the retina. As we move, the images of nearby objects, in this case the tall shrubs, sweep across a larger stretch of our retinas in the same amount of time than the images of distant objects such as the moon and the mountains, so the nearby objects appear to move faster.
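As a rough illustration (my own numbers, not from the clip), the angular speed of an object's image for a moving observer is roughly the observer's speed divided by the object's distance, which is why the shrubs streak by while the mountains barely crawl:

import math

def angular_speed_deg_per_s(observer_speed_mps, distance_m):
    # Approximate angular speed of a passing object's image:
    # proportional to observer speed, inversely proportional to distance.
    return math.degrees(observer_speed_mps / distance_m)

v = 25.0  # about 90 km/h
print(angular_speed_deg_per_s(v, 10.0))    # roadside shrub 10 m away: ~143 deg/s, a blur
print(angular_speed_deg_per_s(v, 5000.0))  # mountain 5 km away: ~0.3 deg/s, nearly still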
Many computer games and movies have made use of this depth cue to enhance the perception of movement in scenes. Their commercial value improves as motion parallax makes them more entertaining to watch.

To learn more about Motion Parallax, you can visit this site: http://psych.hanover.edu/Krantz/MotionParallax/MotionParallax.html


Deletion and Accretion




Because of academic and other kinds of stress, I closed one eye and played with my hands. I positioned my right hand at arm's length and my left hand at about half that distance, just to the left of the right hand. As I moved my head sideways to the left and back again while keeping my hands still, my right hand was covered (deletion) and then uncovered (accretion).

January 22, 2011

When I see your Face, there’s not a thing that I would change... By Patrick N. Rabanal


‘Coz girl you’re amazing just the way you are.

Don’t you feel like just melting when you see the face of your crush? Whether it's while walking along the corridors or while seeing a picture of that person. You may not know it, but there are a lot of different parts of your brain that work together to really “see” that face.
In class we discussed how different areas of the brain are involved in face perception: the occipital cortex for initial processing, the fusiform gyrus for identification, the amygdala for emotional aspects, the superior temporal sulcus for gaze direction, and the frontal area for evaluating attractiveness. This supports the idea of distributed coding, or distributed representation, of images like faces in the brain.
In Perception of Face Parts and Face Configurations: An fMRI Study, Jia et al. focused on three areas that respond selectively to faces: the occipital face area (OFA), the fusiform face area (FFA), and a face-selective region in the superior temporal sulcus (fSTS). They looked into how these areas responded to two important aspects of face perception: face parts (eyes, nose, and mouth) and the T-shaped spatial configuration of these parts. Using the region-of-interest approach, where each subject was pretested to find specific areas of the brain, the experimenters were able to identify the location of these three regions. For the experiment, they used photographs of faces that were manipulated to contain face parts or not, and to have different configurations (a T-shaped face configuration or a scrambled one). Brain activity in these regions was measured for every stimulus presented.
Results show that the OFA and the fSTS are sensitive to the presence of face parts in the stimulus but not to the presence of a veridical T-shaped face configuration, whereas the FFA is sensitive to both kinds of information. Further, only in the FFA is the response to configuration and part information correlated across voxels, implying that the FFA contains a unified representation that includes both kinds of information. Consistent with other studies, they found that the OFA is not only activated by, but also necessary for, the analysis of face parts. This suggested that the OFA conducts a relatively early stage of face processing, earlier than the FFA, and that the FFA and OFA comprise a hierarchical network for face perception, with the FFA inheriting the part sensitivity of the OFA and then further integrating or elaborating this information to include sensitivity to the spatial configuration of the parts.
Much like the OFA, the fSTS was sensitive only to face parts, not face configurations. This finding is generally consistent with previous studies that associate this region with the discrimination of gaze direction and expression.
This is just another piece of evidence for distributed representation of visual images in the brain. Aside from seeing your crush and suddenly feeling bubbly, we now know that many areas of the brain are involved in perceiving faces.
Aside from this, it makes me wonder what the difference in brain activity would be when seeing faces that are attractive versus those that are not, and how the frontal area tells us what is attractive and what isn't.

Source:
Jia, L., Harris, A., & Kanwisher, N. (2010). Perception of Face Parts and Face Configurations: An fMRI Study. Journal of Cognitive Neuroscience, 22(1), 203-211.
Photo from: http://www.flickr.com/photos/dominiquejames/458812863/

The Captivating Face: Look at me I'm Smiling by Vhina Sison




We all hunger for attention, and our eyes provide an open door to a variety of selections to feed our ever-growing appetite for the company of others. Attention helps us selectively focus on certain things in an environment (Goldstein, 2010). As indicated, we scan a scene, for example a crowd or a social setting, fixate on a certain stimulus, and focus our attention on it; stimulus salience makes it stand out. However, stimulus salience can vary with context.

The face is an essential ingredient in capturing attention and helping us get noticed in a social setting. But do emotional facial expressions really play a key role in making the people who display them more noticeable? The even bigger question is which emotional facial expressions have the greater impact in grabbing attention. What facial expression would best help us take hold of the attention we desire from others? What would make a face more attractive and thus capture visual attention? Let me give you a bite of this lusciously intriguing experiment, which should satisfy your curiosity about the answers to these questions.

Williams et al. (2005), in a three-part experiment, explored whether non-threatening and threatening facial expressions were more likely to attract attention than neutral faces. In the first experiment, 12 participants performed a visual search task with a happy or neutral face as the target stimulus. A set of faces (either 4 or 8) was shown on screen for 1000 ms, and participants were instructed to press the button corresponding to the target face they were searching for (a happy face among neutral faces, or a neutral face among happy faces). A four-factor within-subjects design was used to interpret the results, which showed that participants were faster to detect an inverted happy face than an inverted neutral face. Thus, the results indicate that happy faces are detected better than neutral faces, and that when subjected to distortion (inversion) they show no advantage over neutral faces.
For experiment 2, a similar set-up was used, but with a sad face instead of a happy one. Results were similar to the first experiment, showing that sad faces are also detected better than neutral faces.
The third experiment gave more color to the results of the overall study. Here, participants were given several conditions. In one, cued condition, they were told to search for a specific facial expression among neutral faces and to respond to it as quickly as possible by pressing a designated button. In the other condition, participants were instructed to press the specific button corresponding to a facial expression (happy, sad, fearful, or angry) whenever they saw one in the array of faces.

Results showed no significant overall difference between non-threatening and threatening facial expressions; rather, judging by response times, happy and angry faces captured attention more than sad and fearful faces. (Interestingly, angry faces attracted attention even more than happy ones.)


After having read what this study revealed, I now believe that happy faces are indeed the way to go. No wonder extraverted people who smile more, and who are seen as cheerful and jolly, seem to have more friends, get the job, enjoy satisfying relationships, and end up the center of attention at parties.




Even if the studies show that angry faces attract more attention than happy faces, that attention comes with a negative valence: the angry face is perceived as a threat by the person who sees it. No wonder all eyes are on someone who is angry in a social situation, especially when dramatic scenes are involved, but then again people are more likely to keep their distance so as not to aggravate the situation.
So if you want to attract attention from people in order to strike up possible friendships, or you simply want to captivate your crush, why not put on that killer smile of yours?

Sources
Williams, M., et al. (2005). Look at me, I'm smiling: Visual search for threatening and non-threatening facial expressions. Visual Cognition, 12(1), 25-29.
Goldstein, E. (2007). Visual attention. Sensation and Perception, 134-136.

BEER GOGGLES? Geraldine Garcia

Every day, from the moment we wake up until we get back to bed at night, we are constantly surrounded by an overabundance of visual stimulation. What’s cool is that our brains are actually designed to manage all of it, so we don’t have to worry about it ever being too much for our receptors to handle. It’s interesting, actually, the way we’re able to focus on some things and not others, and how we can suddenly shift that attention from one object to the next.
This week, one of the topics we learned about in class was visual attention. What interested me most about this lesson in particular was the concept of inattentional blindness, or being unable to perceive salient stimuli in your direct field of vision because you’re busy attending to something else. I’m sure everyone’s experienced it one way or another. It happens to me quite often, actually. Just the other day I was running late for class, and as I approached my classroom, my gaze was so focused on the classroom’s doorknob that I didn’t even notice the big sign on the door saying “NO CLASS TODAY.” :|
It got me thinking about how much inattentional blindness could affect people in more consequential situations than free cuts, like maybe in medical operations or eye witness accounts or warfare or driving! Interestingly, the article I came across was a study on the effects of alcohol on inattentional blindness. Alcohol consumption is a known cause for so many vehicular accidents all over the world, which is why its relationship with inattentional blindness is a significant matter to probe. 
In the study, participants were given drinks at a simulated cocktail lounge environment. They were told either that they received an alcoholic beverage or a non-alcoholic beverage. Some were told the truth about the drinks they got and some weren’t. This created four conditions, one where the participants were told they received alcoholic drinks and actually got them, one where they were told they received alcoholic drinks but actually got non-alcoholic drinks, one where they were told they received non-alcoholic drinks and got alcoholic drinks, and one where they were told they received non-alcoholic drinks and actually got non-alcoholic drinks. After consuming their beverages, the participants were instructed to watch an edited version of Simons and Chabris’ gorilla experiment video and tasked to count how many times the players wearing white shirts passed the ball to each other. In the middle of the video, a gorilla walks in, beats its chest and walks out of the scene. After the video, they were asked whether they noticed the gorilla in the video or not.  
The question we now ask is: did alcohol consumption affect the participants’ inattentional blindness? I’m sure you could’ve guessed that it did. Only 18% of the participants who received alcoholic drinks (whether they were told they were getting alcoholic beverages or not) noticed the gorilla appear in the video. Among the participants who did not receive alcohol, 46% were able to spot the gorilla when it walked in and beat its chest in the middle of the ball passing.
Based on the study, we can conclude somewhat generally that mildly intoxicated individuals are more likely to experience inattentional blindness than people who are sober. The implications of this finding on driving are consequential. Even a small amount of alcohol such as the amount used in this study can increase the likelihood of inattentional blindness. The repercussions of accidents caused by these drunk drivers could be fatal.
I guess if there is anything to learn here, it is that although our visual systems are constructed to manage the profusion of visual stimuli that surround us, there are factors that can affect their functioning. Conditions such as inattentional blindness (worsened by alcohol consumption!) cause not only simple careless mistakes but harmful accidents as well. So remember everyone! Be careful, be alert, and be safe. Never drink and drive :)


Source:
Clifasefi, S. L., Takarangi, M. K. T., & Bergman, J. S. (2006). Blind drunk: The effects of alcohol on inattentional blindness. Applied Cognitive Psychology, 20, 697–704.

Mother Earth 1-Humans 0: Erik Andrade Tongol


-Lust-


Every day we are bombarded with thousands of images, whether we are walking down a crowded city street or along a deserted forest trail. Our brains are amazing because we use only general descriptions of the type of scene, or gists, to identify scenes. In a fraction of a second we can perceive the gist of a scene. Our brains would tire easily if we had to process every detail of every scene we encounter, and it would take forever to do so. This was our topic in Psychology 135 (Sensation and Perception) yesterday. I tested it while watching television earlier: as I flipped channels, I could grasp the gist of each show in a fraction of a second, which made it easier to pick one I was interested in. I was somewhat impressed by my ability to obtain the gist of a show so rapidly, and this is why I chose a journal article about perceiving scenes and their gists.
I came across an interesting journal article called “How long to get to the 'gist' of real-world natural scenes?” by Rousselet, Joubert, and Fabre-Thorpe (2005). Their study examined the processing time of a natural scene in a fast gist-categorization task. A total of forty-eight adult volunteers were tested across Experiments 1 and 2. In the first experiment, participants categorized colored pictures of real-world scenes belonging to two natural categories, sea and mountain, and two artificial categories, indoor and urban. Experiment 2 was basically the same as the first, but both colored and black-and-white scenes were used to examine the role of color in performance. 384 images from each of the four environmental categories were used, each flashed to the participant for 26 ms. Results were consistent across both experiments, showing that natural environments could be classified faster than artificial environments. Experiment 2 also showed that color information does not appear to be the most crucial feature used to categorize real-world scenes. This study further strengthened earlier findings that the gist of real-world scenes can be acquired rapidly and with high accuracy.
The article was very informative. I applaud the researchers because I would never have imagined that there was a difference in perceiving natural and artificial scenes. The idea was very simple but very relevant and creative.
It is fascinating to know that there is a difference in getting the gist of natural scenes as compared to artificial or man-made scenes; that is, we form gists of natural scenes significantly faster than of artificial scenes. This information may be very useful to me because I like going out. I like traveling and going to new places, and I especially love to jog where nature surrounds me. Seeing new sights is a hobby of mine. Now I know that I react faster to natural scenes than to man-made scenes. You never know; a few milliseconds' difference in perceiving the gist of a scene could spell life or death in an emergency.
Yet again, the brilliance of mother earth triumphs over manmade objects.


Source:
Rousselet, G., Joubert, O., & Fabre-Thorpe, M. (2005). How long to get to the 'gist' of real-world natural scenes? Visual Cognition, 12(6), 852-877.

Pride: Female Ovulation, Visual Attention & Memory, by Mikki Miranda

As a bona fide heterosexual woman with indubitably natural woman tendencies, I admit that I sometimes feel my womanly senses tingling whenever I see a subjectively attractive man. (Believe it or not, I put aside my feminist-pride card for this.) It may seem to you, my Y-chromosomed reader, rather creepy and slightly objectifying, but I just can't help it. Call it hindsight bias or something else, but I think that even though I have successfully acquired a stable manboy for myself, I still find good-looking men a sight to behold. (I can't give examples, though; no one would actually believe me if I did.) But seriously, these seemingly innocent natural reactions to beautiful creatures led me to a heightened interest in how and why the human mate-selection scheme operates in real life. Moreover, reading those recent articles relating unconscious biological processes to mate selection was pretty exciting - to me it's like a perfect menage a trois of psychology, biology, and human evolution!


If you don't know what menage a trois means, just stare at this picture for 5 seconds.

Yes, I'm pretty sure we're all aware of those experiments about male and female preferences: several studies have found that males are relatively less selective about potential relationships than females, who generally have higher standards in those areas. Usually explained by the cost of energy expenditure in child rearing, women are more likely to select males who would at least appear to provide ample resources for the offspring. But I am not here to babble further about the early research on human mate selection; I am here to discuss an experience that I guess only women can relate to: OVULATION. (Male readers are still encouraged to read further. I will not be talking about the gross stuff here; Sir Mamaril already took care of that.) When it comes to female ovulation, one of the well-known studies is Penton-Voak et al.'s (2003) work on the relationship between a woman's stage in her ovulation cycle and her preferences in males. They found that ovulating women are more attracted to men showing high levels of masculinity, and the reverse holds outside the fertile phase. They explained that symmetry and masculinity seem to reflect the possession of genetically suitable traits for survival.


Allegedly the epitome of American masculinity


Despite evidence of ovulatory shifts in behavior and expressed preferences, the question still remains: how can ovulation affect early-stage cognitive processing? In the 2010 study entitled "I only have eyes for you: Ovulation redirects attention (but not memory) to attractive men," Anderson et al. hypothesized that ovulating women pay increased attention to handsome men compared to their non-fertile counterparts. They further predicted that such enhanced cognitive processing, i.e., visual attention, should lead to increased memory for the particular male target.


Their methodology was actually quite simple. Ninety female participants were divided according to whether they were fertile or not. Using eye-tracking software to record eye movements and measure attention, the experimenters showed the participants four pictures of neutrally expressive male faces that had been pre-rated for physical attractiveness. The participants were then given a memory test in which they had to indicate whether or not they had seen each of the faces shown to them.


I feel sorry for this guy. Dubbed as "Least Attractive Male Face". Can they at least comb his hair?


So what did the experimenters see? Using the eye-tracking device, they found that ovulating women paid relatively more attention to attractive male targets in arrays of varying faces. However, they also found that the participants' fertility status had no effect on attention to other face types and did not produce an analogous effect on memory. Basically, this means that ovulating women pay more initial attention to handsome men, but this increased visual attention does not translate into better memory of them.


Why is this so? One explanation may be that the visual attention of highly fertile women did not actually reflect increased cognitive processing but rather a strategic (albeit unconscious) inclination to communicate romantic interest to desirable men. The researchers also suggested that an additional "cognitive suppressing mechanism" in women prevents them from expending more processing than necessary, especially on male strangers. They further expect women in highly committed relationships to show this effect more than non-committed women.


The research definitely shows promise for the future of visual attention and its relationship to human mate selection, but I honestly felt that the researchers' explanations did not fully suffice. One reason is that their discussion section did not relate the effects of ovulation to its actual behavioral and cognitive effects on women. Also, as I've learned so far in our Perception class, attention and memory are composed of different factors and components that cannot be measured solely by eye movements and simple questionnaires. I believe that an imaging approach, through fMRI or PET scans in addition to eye tracking, would be a better way to describe the cognitive processing expended in visual attention and attraction.


Man in dark blue: my personal eye tracker
  




Source: 
Anderson, U., Perea, E., Becker, D., Ackerman, J., Shapiro, J., Neuberg, S., & Kenrick, D. (2010). I only have eyes for you: Ovulation redirects attention (but not memory) to attractive men. Journal of Experimental Social Psychology, 46, 804-808.

Penton-Voak, I. S., Little, A. C., Jones, B. C., Burt, D. M., Tiddeman, B. P., & Perrett, D. J.(2003). Female condition influences preferences for sexual dimorphism in faces of male humans (Homo sapiens). Journal of Comparative Psychology, 117, 264–271.

Wrath: Video Game Realism and Aggression, as Reviewed by Alexander Dela Fuente


                I like playing videogames very much; they provide me with a means for both performing actions and experiencing situations which are inaccessible in real life. Imagine, then, my personal concern whenever an outcry ensues over the supposed negative effects of videogames on real-world existence; however, this concern often abates whenever I snoop around for the latest research on gaming’s positive effects. Tonight, however, I find that such comfort may momentarily be out of reach; tonight, science seems to side with the naysayers and the alarmists. Damn them for their sensationalist ways! Damn them!
                
                 Perhaps that may’ve been premature; I’ve yet to share the sample of disagreeable science to which I’ve just referred. The work, entitled The effects of videogame realism on attention, retention, and aggressive outcomes, is credited to Marina Krcmar, Kirstie Farrar, and Rory McGloin; the former hails from Wake Forest University, the latter two from the University of Connecticut. As the title so eloquently puts it, this scientific study is focused on the link between videogames and aggression; while I am disinclined to favor the existence of such, my preliminary outburst suggests I may not be treating the topic with the free-from-bias viewpoint it, like all children of science, deserves. In attempting to set such a bias aside, I’ve come to recognize certain interesting, if not downright positive, motions set forward by the research study. First and foremost is the focus on both attention and retention in proceeding with the study; without either, I would not be able to review it for this Psychology 135 (Sensation and Perception) blog. Second is the premise, as I understand it: that videogames, as they progress in successfully presenting an experience that approaches the tactile essence of reality without being bound by its legal boundaries, may simulate situations so accurately as to elicit the chemo-physiological reactions appropriate to actual occurrences. Third is the methodology, insofar as I find it incredibly amusing, on a personal level, to read that the researchers chose to represent increasing success in reality-simulation by selecting connected games from the same franchise – not just any franchise, mind you, but Doom itself! This choice, at least to me, exemplifies the pragmatism which I believe good researchers must possess.

                What were the variables? Ah, Mister Guidequestions, the level of realism, as dictated by the iteration of Doom being played by the participants, was the independent variable; the resultant levels of both verbal and physical aggression – as extrapolated from standardized tests and, thankfully, not field observation – were the dependent variables.
                 
                What was the methodology? As I understand it, participants were first randomly assigned to a condition then taught how to play the game. Those in the experimental condition were allowed to play before being deceived into believing they would not receive full credit for their participation; they were given a means to vent any frustration at this mix-up by filling up an assessment form regarding the professor who was responsible for the experiment and, ostensibly, the credit difficulties. Those in the control condition took the same steps, albeit with the deception and the form-filling preceding playtime. In the interim between the enumerated steps, questionnaires regarding demographics, video game experience, attention, and other related fields of concern were distributed.

                Who were the participants? As the journal article tells me, a total of 130 undergraduates with an average age of 19.6 years were recruited.

                What are your comments and criticisms? Well, Mister Guidequestions, I sometimes wonder if I’m too old to be writing in so trite and awkward a manner such as this; surely I’m above using an imagined interviewer to frame my thoughts, right? Oh, you meant about the journal article! Well, I really, really liked it, as far as the science was concerned. I liked how the researchers relied on the presumed similarity between same-genre games from the same franchise; it felt like a wiser allocation of time and resources, presuming no copyright laws were infringed upon. The use of deception is always amusing to behold, as is the division of experiment steps into interchangeable modules for ease of condition differentiation; I absolutely love it whenever researchers combine ease of set-up with a justifying avoidance of confounding variables.

                At this point, you may be wondering what happened to my initial animosity. It’s still there, raging at the focus on aggression. Why aggression? Sure, I know the literature supports a link between violent video games and aggression, but come on! Of course violence is linked to aggression! Of course games about shooting people/facsimiles-of-people are violently linked to aggression! Give me a break, please.

                What I want to see is either a study on the link between First-Person Shooter games and some other character trait – I don’t know what, maybe individuality –, or a comparative study of game genres when related to aggression. For example, I personally enjoy Real-Time Strategy Gaming; does ordering my in-game army to shoot at people make me more aggressive than when I do the shooting myself on a First-Person Shooter? I do wonder.

                One final note on this excessively lengthy piece: How often do you read lines as absurd as the following in scientific journals?

Not often.

FOCUS ME NOT By: Ace de Guzman Ligsay

-PRIDE-
We march around the PHAN Building as if we were the best undergraduate students of the college. We bear in mind that we wear the crown of owning everything other CSSP students do not have: our own building, our own colorful and air-conditioned classrooms, our own parking space, and our own yearbook. But we forget, intentionally or unintentionally, that as students of the best undergraduate Psychology program in the country, we should be starting awareness campaigns about abnormal psychological conditions. This is the pride we should have and carry on: being catalysts of unconditional social acceptance for psychologically challenged individuals.






FOCUS ME NOT- Exploring the Visual Attention Aberrations of our Autistic Friends





The textbook describes autism as a neurodevelopmental disorder characterized by withdrawal from contact with other people, repetitive actions, and difficulty telling what emotions others are experiencing in social situations. According to Grelotti, there is a huge difference in both behavior and brain processing between autistic and non-autistic people. As highlighted by the research of Ami Klin (2003), autistic people struggle to solve reasoning problems when placed in actual social situations. Psychological science agrees that autistic people perceive the environment differently than typical observers: autistic people tend to focus on seemingly random things, while non-autistic people read facial expressions and detailed movements, especially the minute movements of the face and eyes. Kevin Pelphrey and coworkers (2005) measured brain activity in the superior temporal sulcus, an area involved in perceiving how other people direct their gaze in social situations. This study concluded that the difference may have to do with how observers interpret other people's intentions (Goldstein, 2007).






As established earlier, individuals with autism tend to focus excessively on stimuli that are irrelevant to a non-autistic observer. A study by Townsend, Harris, and Courchesne (1996) provided evidence for slowed spatial orienting of attention in autism. They tested a group of autistic subjects and age-matched normal controls on a cueing task in which attention-related response facilitation is usually indexed by the speed of target detection, but here depended on accuracy of response (target discrimination) rather than speed. According to the researchers, this set-up made it possible to separate the time needed to process and respond to target information from the time needed to move and engage (orient) attention. The results were consistent with previous observations that people with autism are slow to shift attention between and within modalities.


According to Plaisted and Davis, children with autism exhibit typical attentional modulation of static information but have difficulty modulating dynamic information. For example, they find it hard to follow sudden shifts of colors, scenery, and facial expressions in their favorite Dora the Explorer TV show.


Autistic people find themselves jailed in a world of pieces. It must be strange and confusing to be unable to shift gaze quickly or to recognize emotions and minute movements. The social interactions of autistic individuals are limited to what they can see and what they can comprehend. We can attribute their withdrawal from our world in part to their problems with visual attention and their slowed spatial orienting of attention. The slow gaze-shifting account explains the inability of autistic individuals to comprehend and appreciate the complex system of social interaction: they miss important nonverbal communication cues like facial expressions, eye movements, and fast facial movements.


As Psychology majors, we are in the forefront of raising awareness regarding autism.
Autism Awareness Campaign video from YouTube. Check this out :)
http://www.youtube.com/watch?v=1VA6Q3vTC_o&feature=fvw


Sources:


Grelotti, D. J., Klin, A. J., Gauthier, I., & Schultz, R. T. (2002). Social interest and the development of cortical face specialization: What autism teaches us about face processing. Developmental Psychobiology, 40, 213-225.


Goldstein, E. (2007). Something to Consider: Attention in Autism. Sensation and Perception, 148-149.


Klin, A., Jones, W., Schultz, R., & Volkmar, F. (2003). The enactive mind, or from actions to cognition: Lessons from autism. Philosophical Transactions of the Royal Society of London B, 345-360.


Plaisted, K. C. (2001). Reduced generalization: An alternative to weak central coherence. In Burack, J. A., Charman, A., Yirmiya, N., & Zelazo, P. R. (Eds.), Development and autism: Perspectives from theory and research. New Jersey: Lawrence Erlbaum Associates.




Pelphrey, K. A., Morris, J. P., & McCarthy, G. (2005). Neural basis of eye gaze processing deficits in autism. Brain, 128, 1038-1048.


Townsend, J., Harris, N. S., & Courchesne, E. (1996). Journal of the International Neuropsychological Society, 2(6), 541-550.
http://www.ncbi.nlm.nih.gov/pubmed/9375158