About Keith

Keith Brawner currently works in the simulation industry for the DoD, where he has been before, during, and after earning a Master's in Intelligent Systems. Sadly, he is not yet a Doctor.

Sunday, October 31, 2010

Affect and Trust

Affect and Trust was written by Lewis Hassel of the College of Information Science and Technology at Drexel University, PA. It can be found for free online here.


Life
It has been an exciting week, with a work party, Halloween party, Halloween outing (and old movie night) at the Enzian, and tonight's actual Halloween.  I think that with a party, movie, trick-or-treating, and Halloween Horror Nights, we may have bled this year dry.  Career-wise, RDECOM (now a subcomponent of ARL) has opened a position for me, and I have applied.  I can't wait to get my hands dirty with a research organization.  Of course, I am still reading Building Intelligent Interactive Tutors.

Issue
Hassel argues that there is a distinct difference between trust and belief. In his model, trust is grounded in action, while belief is grounded in cognition. The example that shows this most clearly is the person falling backward. He may believe that the other person is going to catch him, based logically on evidence that the other person is strong, unlikely to want to harm him, in close proximity, etc. However, Hassel argues that he does not trust that person unless he actually falls. While we may believe that we trust someone, we do not trust them until the action is actually taken.

When do we trust someone?
Bos et al. showed in 2002 that we tend to trust people more when meeting them face to face.
Zheng showed that we can come to trust people we've never met just as well, but that it takes longer, and that just seeing a picture of the person is immensely helpful. In fact, it is more helpful than seeing a data sheet about the person.

How do we model it?
note - PEU is Perceived Ease of Use
Each of the terms here is defined in the paper, as is each numbered path
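To make the idea concrete, here is a rough sketch (not from the paper) of how a path model like this could be scored. Only PEU comes from the text above; the other construct names and all path weights here are invented for illustration. It does capture the trust-versus-belief distinction: with no actual interaction, belief can be high while trust stays low.

```python
# Hypothetical path model: each construct is a weighted sum of its
# parent constructs or observed measurements. Names and weights are
# illustrative, not Hassel's actual model.
PATHS = {
    "belief": {"evidence": 0.7, "peu": 0.2},          # cognitive
    "trust":  {"belief": 0.5, "interaction": 0.4},    # requires action
}

def score(construct, observed, paths=PATHS):
    """Recursively score a construct from observed measurements."""
    if construct in observed:          # an observed (measured) variable
        return observed[construct]
    return sum(weight * score(source, observed, paths)
               for source, weight in paths[construct].items())

# Strong evidence and high ease of use, but zero actual interaction:
observed = {"evidence": 0.9, "peu": 0.8, "interaction": 0.0}
print(score("belief", observed))   # high: belief forms without action
print(score("trust", observed))    # low: trust has not yet been built
```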


Conclusion and Why do you care? 
It is important to know why you trust someone and why they trust you.  As we develop more advanced models of the phenomenon, we understand it better.  As we understand it better, we learn how we can trust others more, and how we can build our own trust with others.

Sunday, October 24, 2010

Rant - AI in Education - Building Intelligent Tutors

I am currently reading Building Intelligent Interactive Tutors: Student-centered strategies for revolutionizing e-learning. You can buy the book on Amazon here. You can also download it, say for an ebook reader, here.

Life
Super-awesome weekend. We went to Halloween Horror Nights at Universal Studios to see movie-quality, real-life monsters that jump out at you in a number of haunted houses (8? 10?), combined with the great rides of a world-class theme park. In addition to that splurge of entertainment, we went to La Nouba, and were thoroughly entertained by live performers for over an hour. In a few hours, we will be attending a block party. So, please forgive the late update.


Breaking points
I've been reading Building Intelligent Interactive Tutors by Beverly Park Woolf, and one of the ideas she returns to often is that each field occasionally reaches a critical threshold. For instance, the field of computer science benefited from object-oriented programming and design. The field of physics has made leaps and bounds based on the models it can now create via computer simulation. She argues that the field of education is now overdue for such a breakthrough, for a few reasons. In fact, just this month this subject was featured on the technology site Slashdot. You can read more here.


Why now?

In the field of education, learning has been studied and segmented into a few categories:
  • One-on-one instruction versus group instruction (one-on-one is significantly more effective)
  • Inquiry learning versus lecture learning (inquiry is more effective)
  • Testing versus teaching (tests make ability measurable, but the time is better spent teaching if you already know the student's ability level)
  • Motivational learning versus subject learning (students learn better when motivated)
  • Mastery learning (building a subject from the ground up, and asking 'why?') outperforms other forms of learning
Logically speaking, you want a one-on-one teacher who teaches by asking questions (or better yet, by getting students to ask the right questions), without any tests, in a subject the student is interested in. I can see you rolling your eyes at this. These teaching methods are known to be the most effective, but they are also the most difficult to implement. Having one first-grade teacher per student is ludicrous, and attempting to get students to sit still long enough to actually ask questions about the subject matter isn't exactly realistic.
Or is it?
There is an obvious exception to this, however, and my reader likely sees it coming. Intelligent Tutoring Systems (ITSs) offer the real promise of optimal learning. Each of these subject-area improvements offers leaps and bounds in performance:
  • ITSs can tutor one-on-one, and work best this way
  • ITSs can teach via inquiry learning, either by providing a large number of questions, or by parsing typed (or spoken) responses
  • An ITS has no real need to test. When working in a domain like mathematics, it can assign homework problems that are graded on the spot.
  • ITSs can gauge student involvement as well as or better than a live tutor, using sensors
  • ITSs can use mastery learning if constructed in the correct manner by an expert (say, a grade school teacher)
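As a minimal sketch of that last point (assumed for illustration, not from Woolf's book): a mastery-learning ITS can track a mastery estimate per skill and always drill the first unmastered skill whose prerequisites are already mastered, building the subject from the ground up.

```python
# Illustrative mastery-learning scheduler. The curriculum, skill names,
# and threshold are made up; a real ITS would estimate mastery from
# graded responses.
MASTERY_THRESHOLD = 0.95

def next_skill(mastery, curriculum):
    """Return the first unmastered skill whose prerequisites are mastered."""
    for skill, prereqs in curriculum.items():   # ordered: foundations first
        if mastery[skill] >= MASTERY_THRESHOLD:
            continue                             # already mastered
        if all(mastery[p] >= MASTERY_THRESHOLD for p in prereqs):
            return skill                         # ready to be drilled
    return None                                  # everything is mastered

curriculum = {
    "counting": [],
    "addition": ["counting"],
    "multiplication": ["addition"],
}
mastery = {"counting": 0.98, "addition": 0.70, "multiplication": 0.10}

print(next_skill(mastery, curriculum))  # drills addition before multiplication
```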

Why do you care?
There is a strong case to be made that the students of the future will be taught via a computer interface customized to their needs. It will keep track of their learning across subjects, capture and hold their interest, and get them to ask questions about the subject matter. It is likely that it will be distributable via the Internet, and that a large portion of mankind will be bettered by it. A significant portion of the planet could then get the same education that people in first-world countries are getting.
There are still some important problems to solve (for instance, all of the above), but it is likely to be only a matter of time before they can be taken care of.

Friday, October 15, 2010

Predicting Searcher Frustration

Predicting Searcher Frustration was written out of the University of Massachusetts by Feild and Allan, in conjunction with Jones from Yahoo!. It can be found for free online here.

Life
The plan for this weekend is to go camping and enjoy some time out-of-doors.  There are plans for s'mores, canoeing, and hiking, in addition to whatever else strikes our fancy.

Problem
Search engines, like all businesses, are striving to be better in order to claim more market share, ad revenue, and viewers. As part of this effort, one of the things they (or at least Yahoo!) are looking into is predicting when users are getting tired or frustrated while looking for data. If they detect that a specific user is frustrated, then presumably they could improve the interface, give different results, give a different category of results, or simply mark it as an area for future improvement.


Experiment
What we are going to do here is make a bunch of users go on a scavenger hunt for information, and report how they feel about it.  This will be measured in a couple of different ways:
  • Query Logs - including page focuses, clicks, navigation, mouse movements, etc. (47 features in total)
  • Sensor Data - including a mental state camera, pressure sensitive mouse, and pressure sensitive chair
    • mental state camera detects 6 states - agreeing, disagreeing, unsure, interested, thinking, and confident
    • mouse has 6 pressure sensors - 2 on top and 2 on each side
    • chair has 6 sensors - 3 on the back, 3 on the seat
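As a rough illustration of the query-log approach (the paper's actual 47 features and its learned model are not reproduced here; the feature names and weights below are invented), a frustration predictor might score a search session like this:

```python
import math

# Hypothetical query-log features for one search session.
def extract_features(session):
    queries = session["queries"]
    return {
        "num_queries": len(queries),                                 # reformulations
        "avg_query_len": sum(len(q.split()) for q in queries) / len(queries),
        "num_clicks": session["clicks"],                             # results opened
        "total_dwell_sec": session["dwell_sec"],                     # time on results
    }

# Illustrative hand-set weights: many reformulations with few clicks and
# little dwell time reads as frustration.
WEIGHTS = {"num_queries": 0.6, "avg_query_len": 0.2,
           "num_clicks": -0.5, "total_dwell_sec": -0.02}
BIAS = -1.0

def frustration_probability(session):
    feats = extract_features(session)
    z = BIAS + sum(WEIGHTS[k] * v for k, v in feats.items())
    return 1.0 / (1.0 + math.exp(-z))   # logistic link, as in the paper's models

frustrated = {"queries": ["cheap flights", "cheap flights orlando",
                          "flights orlando november deals"],
              "clicks": 1, "dwell_sec": 10}
satisfied = {"queries": ["python csv module"], "clicks": 1, "dwell_sec": 120}

print(frustration_probability(frustrated) > frustration_probability(satisfied))
```

A real system would learn the weights from labeled sessions rather than setting them by hand.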

Results
As you can see from the figure on the right, there are a few important conclusions:
Why do you care? 
1 - Search engines are getting better, and user modeling is likely to play a role in this in the future
2 - Direct sensor data is not required in order to predict how you are feeling (your webpage views alone are more accurate)

Friday, October 8, 2010

“Yes!”: Using Tutor and Sensor Data to Predict Moments of Delight during Instructional Activities

“Yes!”: Using Tutor and Sensor Data to Predict Moments of Delight during Instructional Activities was written out of Arizona State by Muldner, Burleson, and VanLehn.  It can be found for free online here.

Life
Nothing particularly fancy is going on. The combined Regular Day Off and Columbus Day have granted a glorious 4-day weekend. This, combined with today's high of 85, will make for a relaxing weekend of light reading, sushi, and entertainment.

Career
I/ITSEC, the international conference on modeling and simulation, will be in Orlando next month. It will be a good time, offering everything from paper presentations on how to train nurses, to the latest advancements in computer graphics, to 3-dimensional printing, to firing grenade launchers (mockups with recoil) at insurgents (simulated).

Problem

The AI in Education field, in general, has been focusing on trying to keep students 'in the zone', or to keep them from getting frustrated, through the use of hints, easy questions, scaffolding, or a number of other methods. However, this paper postulates that the most important moments in education are the "yes!" moments of great success. We don't yet know how to detect these moments, and may interfere with their occurrence.


"Yes!"
Most everyone has had a "yes!" moment in their education. If you think back, this is the moment when you had a sudden realization of a concept, or when you had just answered a particularly difficult problem. This moment probably involved significant work with a sudden reward. As my high school calculus teacher would say, "These are the moments when the student finally understands, and the moments I live for."

Experiment
"Yes!" data was gathered using the Example Analogy (EA) Coach with Newtonian physics. Interactions with the interface were recorded, and students were asked to think aloud. Additionally, a posture chair, skin-conductance bracelet, eye tracker, and pressure mouse were used to gather data on each student's current state. The "yes!" moments were labeled by an expert, and the system was trained to recognize their occurrence based upon that information.

Results
Posture Chair Data
Logistic regression was used to attempt to make sense of the sensor data. As has been found in other studies (such as Automatic Prediction of Frustration), the data from the posture chair was not particularly usable. However, through the use of time-based models that included pupil response and input from the other sensors, they were able to correctly predict 60% of the "yes!" events, while incorrectly flagging non-"yes!" events 13% of the time. Obviously there is some work left to be done in the field, but these results are promising and show possibility.
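Those two numbers are a hit rate and a false-alarm rate. A quick sketch of how they are computed (the labels below are made up, chosen only so the toy numbers match the rates quoted above):

```python
# Hit rate: fraction of true "yes!" events the detector caught.
# False-alarm rate: fraction of ordinary moments it wrongly flagged.
def hit_and_false_alarm(true_labels, predicted):
    pairs = list(zip(true_labels, predicted))
    hits            = sum(1 for t, p in pairs if t and p)
    misses          = sum(1 for t, p in pairs if t and not p)
    false_alarms    = sum(1 for t, p in pairs if not t and p)
    correct_rejects = sum(1 for t, p in pairs if not t and not p)
    return hits / (hits + misses), false_alarms / (false_alarms + correct_rejects)

# Toy data: 5 real "yes!" moments and 15 ordinary moments.
truth = [True] * 5 + [False] * 15
preds = [True, True, True, False, False] + [True, True] + [False] * 13

hit_rate, fa_rate = hit_and_false_alarm(truth, preds)
print(hit_rate, fa_rate)  # 0.6 and roughly 0.13
```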

Why do you care? 
1 - ITSs can keep students optimally challenged if students are reporting a high frequency of "yes!" events. This is just as important as, if not more important than, predicting frustration.
2 - Detection of such events is possible, and should be further investigated.

Friday, October 1, 2010

Exploring the Possibility of Using Humanoid Robots as Instructional Tools for Teaching a Second Language in Primary School

Exploring the Possibility of Using Humanoid Robots as Instructional Tools for Teaching a Second Language in Primary School was written at National Central University (Taiwan) by Chang, Lee, Chao, Wang, and Chen. It can be found for free online here (page 18).


Life
Mostly just chilling at home, hanging out with friends (trivia, D&D, dinner), and going to work. All is quiet on this eastern front. Oh yeah, and we've been goofing around with the Sony 900BC eReader, which is proving itself useful for reading Dune, Ghost in the Shell, and, of course, more research papers!


Problem

This paper is unusual in the sense that it doesn't actually propose a problem. The problem is well studied: people learn foreign languages poorly. With that said, the authors study the suitability of a specific tool to tackle it. Today's tool: robots.


Experiment
Our research teacher in high school was fond of saying that you are not to say "I hypothesize that RAID will kill roaches." Instead, you are to say "I hypothesize that the addition of RAID to roaches will have a measurable effect with regard to activity, food intake, etc." For those of us watching carefully, all we said there was that we think something will happen, and will measure some things to see if it did. Today we hypothesize that robots will have a net positive effect on the classroom.

Why and How?
Robots have some of the benefits listed to the left with regard to teaching. The researchers here built 5 modes of operation in an attempt to drive engagement:
  • Storytelling - robot tells a story in a foreign language, complete with different voices for the characters
  • Oral Reading - robot reads a printed story aloud, and calls upon children to help it
  • Cheerleader - robot encourages students to participate in games, and dances when students get the answers right
  • Action Command - robot plays Simon Says with the students
  • Question-and-answer - robot talks to students, asks them to introduce themselves, introduces itself, etc. The robot plays the role of a foreign persona (male or female)
Results
When there is a robot in the class (doing the above):
  • Students respond more loudly, speak more often, ask more questions, listen more quietly, and watch the robot intently (according to a teacher survey)
  • Shyer students interact more, while more outgoing students interact for longer periods of time
  • Teachers reported offloading some teaching, as a robot can perform roles of either sex while the teacher is mostly limited to one
  • Teachers had additional time to work with struggling students while everyone else was engaged with the robot
Issues
  • Teachers report a lack of training
  • The technology is complicated
  • Student motivation decays over time
  • The robot shows no emotion
Conclusion:
Robots are awesome.  When students are talking to a robot they are more engaged, and talk/practice their language skills more often.  This technology can be coupled with Affective Tutoring in order to further help the learner.

Why do you care? 
Coming in at a whopping price of $250 (or roughly 5 textbooks), we may see the involvement of robotic helpers in the school system relatively soon. These advances, coupled with the advancement of image/face recognition, could result in genuine custom tutoring becoming available relatively cheaply.