About Keith

Keith Brawner works in the simulation industry for the DoD, and did so before, during, and after earning a Master's in Intelligent Systems. Sadly, he is not yet a Doctor.

Wednesday, April 27, 2011

Blog down! Blog down!

As you will no doubt have noticed after I point it out, there hasn't been any activity on this blog for quite some time.

My current projects are:
1 - Continued work for the Army Research Lab: 5+ pending publications this year and counting, not to mention 3 studies, and 10 months left in the year.
2 - Pursuit of PhD and development of AI programs, currently in the realm of unsupervised student state assessment, and future state predictions from sparse tagging.
3 - Development of an iPad app with my cousin, Michael.
4 - Travel to Europe in 2 weeks.
5 - Learning financial literacy for extreme saving and early retirement.  My wife and I should approach the 50% total savings rate next month.

I am only one man, and the blog, which produces no income, provides no writing outlet (see published papers, and two lit reviews this summer), and costs energy/time, is just no longer worth it.

I may return some day and update my excuse list, but it is probably true that I can generally occupy my own time.

Tuesday, December 28, 2010

Self discovery enables robot social cognition: Are you my teacher?

"Self discovery enables robot social cognition: Are you my teacher?" was written by Kaipa, Bongard, and Meltzoff with the Universities of Vermont and Washington.  It can be found for free online here.


Life
First, sorry for the lack of blog posts.  I've not stopped reading papers, but the motivation to post things is low.  Additionally, my time comes at a premium during the holiday season.  However, I didn't say that I was going to stop posting for a while, so I'm kind of a jerk.

Christmas was excellent, and I got to spend some time with my family and loving wife.  She is the best Christmas gift ever, besides maybe a briefcase.

Problem/Background
As they say, imitation is the sincerest form of flattery.  However, the question for a robot is who to imitate.  Work in the field of robotics on how to learn from watching has been performed by Breazeal/Scassellati (2002) and Dautenhahn/Nehaniv (2002), but the question of who to imitate is currently unsolved.  In 1997, Schaal showed 30 seconds of pole balancing video to a robot, and this was shown to increase learning rates.  This was one of the earlier experiments in this field, with more work done each year.  The paper does a great job of summarizing.



Experiment
The goal in this experiment is to have a mobile robot:
  1. Figure out what joints it has, and how it is able to move
  2. Figure out which other robot is similar enough to make a good teacher
  3. Figure out which individual actions imitate the teacher
  4. Actually perform said imitation
 
Self Discovery
There has been prior work done in self-discovery.  The robot is told of the parts that make it up: how much each joint can move, and the shape/size/density of its materials.  The robot discovers possible position combinations through the technique used in Resilient Machines Through Continuous Self-Modeling (Bongard/Zykov/Lipson (2006)).
 
Self Modeling
A hill climbing algorithm is used to search the space of self models after all of the min/max parameter values have been found.  The self model is developed using a single genome.  This is then mutated (a Gaussian shift in a random direction) and evaluated for error.  Lower error models are kept while higher error models are thrown away.
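
For the curious, here is a minimal Python sketch of that hill-climbing loop.  The parameter vector, error function, and numbers are hypothetical stand-ins of mine, not the paper's actual self-model representation:

```python
import numpy as np

def hill_climb_self_model(error_fn, n_params, iterations=1000, sigma=0.1, seed=0):
    """Greedy hill climbing over a single self-model 'genome'.

    error_fn: maps a parameter vector to a scalar error against observed data
              (a hypothetical stand-in for evaluating the robot's self model).
    """
    rng = np.random.default_rng(seed)
    best = rng.uniform(-1.0, 1.0, n_params)              # initial genome
    best_err = error_fn(best)
    for _ in range(iterations):
        child = best + rng.normal(0.0, sigma, n_params)  # Gaussian shift in a random direction
        child_err = error_fn(child)
        if child_err < best_err:                         # keep lower-error models, discard the rest
            best, best_err = child, child_err
    return best, best_err

# Toy usage: the climber recovers parameters it cannot see directly
true_params = np.array([0.3, -0.7, 1.2])
toy_error = lambda p: float(np.sum((p - true_params) ** 2))
model, err = hill_climb_self_model(toy_error, n_params=3)
print(model, err)
```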

Teacher Modeling
Left and right cameras were used to capture data of a teacher robot using the process that Kaipa/Bongard/Meltzoff used in Combined Structure and Motion Extraction from visual data using Evolutionary Active Learning (2009).
 
Teacher Imitation
A 3x2 neural network is used to control the output of the student motors.  The output of the teacher is mapped to a hypothetical output of the student, and the network weights are trained.  The question of which teacher nodes map to which student nodes is solved as part of the error-reduction step.  It is worthwhile to note that, because of the error reduction model, the student never performs the exact maneuver that the teacher does.
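
As a rough illustration (my own toy setup, not the paper's 3x2 controller or its exact error function), here is how one might fit a 3-to-2 mapping from observed teacher motion to student motor commands by driving down the imitation error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical teacher joint trajectory: 200 timesteps x 3 observed quantities
teacher = rng.uniform(-1, 1, (200, 3))

# Hypothetical "true" correspondence, used only to fabricate student targets for this demo
true_map = np.array([[0.8, 0.1],
                     [0.0, -0.9],
                     [0.3, 0.2]])
student_target = teacher @ true_map              # desired outputs for 2 student motors

# Train a 3x2 weight matrix by gradient descent on the squared imitation error
W = np.zeros((3, 2))
learning_rate = 0.05
for _ in range(2000):
    pred = teacher @ W
    grad = teacher.T @ (pred - student_target) / len(teacher)
    W -= learning_rate * grad

print(np.round(W, 2))  # converges toward the hypothetical correspondence
```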

Future Work (and why you care)
Very simple models were used here, but many of them were used in series in order to perform the task.  Whenever a choice of optimization was presented, hill climbing was used.  Simple joints were used, and only two of them at that.  Simple cameras, teacher, etc. were used.  However, the conclusion that a robot which doesn't even know its own body may be able to learn from another robot or human teacher is a powerful one.

Saturday, November 6, 2010

Neural Network Applications in Ship Research with Emphasis on the Identification of Roll Damping Coefficient of a Ship

Neural Network Applications in Ship Research with Emphasis on the Identification of Roll Damping Coefficient of a Ship was written in the Department of Naval Architecture and Marine Engineering in Glasgow, Scotland, as a postdoctoral research paper by Mukhtiar Ali Unar.  It can be found for free online here.


Life
Still having a good time.  Still trying to get a job with the Army.  Also, my team sucks at trivia (we came in dead last, but regularly take 2nd or 3rd in the Thursday night games).

Problem
This paper is mainly a survey paper, but in brief, the main problem was to predict hydrodynamic coefficients for ships.  The hydrodynamic coefficients are used to determine exactly how to execute the appropriate maneuver.  Due to a large number of factors, these cannot be directly measured and have thus far been left to the 'best guess' of the people who are driving the boat.  In theory, if we were able to model the parameters better, we would be able to build safer and more reliable ships.

Background
Artificial Neural Networks (ANNs or NNs) are a computational construction of a function that has many inputs and outputs.  It is a way to represent a problem as a 'black box', where the internals are unknown, but the inputs and outputs are known.  The output of a neuron is the sum of the inputs to it, multiplied by their corresponding weights.  These weights can be 'trained' (adjusted in small increments) until they correctly model a given function.  It is represented to the right.
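
As a toy illustration of that weighted-sum neuron (the sigmoid squashing function and the numbers are my own choices, not from the paper):

```python
import numpy as np

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs (plus a bias), squashed through a sigmoid."""
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Three inputs feeding a single neuron
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.1, -0.2])
print(neuron(x, w))
```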

With that said, a single-layer perceptron can model any linearly separable function, while a Multi-Layer Perceptron (MLP) Neural Network with one or more hidden layers can approximate far more general functions.

NNs have been in use (in research) for naval applications since 1998 or so, with over 100 papers using NNs published in 2005.  They are a reasonably standard way of representing an unknown problem.


Naval Uses
  • Ship design (length, breadth, speed, draft, depth, displacement) - Clausen (2001)
  • Stability parameters (shipping vessels) - Alkan (2004)
  • Hull weight estimation - Wu (1999)
  • Estimation of wave induced ship hull bending - Xu/Haddarra (2001)
  • Automatic hull form generation - Islam (2001)
  • Control (steering, rudder roll, fin stabilization, collision avoidance, path following) - Various
  • Classification (filtering radar picture, marine acoustic signals, wake detection, ship trail clouds) - Various
  • Prediction of waves, tidal levels, storm surges, coastal water levels, and ocean currents - Various (1994-2007)

Experiment
So the goal here was to estimate ship parameters.  For this, a ship controller was constructed and fed data from a centralized database.  It was the job of the NN to stabilize the ship.  The NN error was charted, but not in a meaningful way (sorry Dr. Unar), as it seems like this paper was written for funding, rather than results.

 
Why do you care? 
Let's be honest: you probably don't.  The important takeaway here is that NNs are being used well outside of the traditional domain of artificial intelligence.  Although developed as an application of Machine Learning methods, they have now made their way into fairly mainstream applications.  Just remember that when you are looking at what a predicted storm surge for an area is, or how your semi-automated ship is turning, you are seeing the fallout of AI.

Sunday, October 31, 2010

Affect and Trust

Affect and Trust was written out of the College of Information Science and Technology at Drexel University, PA, by Lewis Hassel.  It can be found for free online here.


Life
It has been an exciting week, with a work party, Halloween party, Halloween outing (and old movie night) at the Enzian, and tonight's actual Halloween.  I think that with a party, movie, trick-or-treating, and Halloween Horror Nights, we may have bled this year dry.  Career-wise, RDECOM (now a subcomponent of ARL) has opened a position for me, and I have applied.  I can't wait to get my hands dirty with a research organization.  Of course, I am still reading Building Intelligent Interactive Tutors.

Issue
Hassel argues that there is a very distinct difference between trust and belief.  In his model, trust is based on action, while belief is cognitive.  The example that shows this most clearly is a person falling backward.  He may believe that the other person is going to catch him, based logically on evidence that the other person is strong, not likely to want to harm him, in close proximity, etc.  However, Hassel argues that he does not trust the person unless he actually falls.  While we may believe that we trust someone, we do not until the action is actually taken.

When do we trust someone?
Bos et al. showed in 2002 that we tend to trust people more when meeting them face to face.
Zheng showed that we can come to trust people we've never met just as well, but that it takes longer, and that just seeing a picture of the person is immensely helpful.  In fact, it is more helpful than seeing a datasheet about the person.

How do we model it?
Note - PEU is Perceived Ease of Use.
Each of the terms in the model is defined in the paper, as is each numbered path.


Conclusion and Why do you care? 
It is important to know why you trust someone and why they trust you.  As we develop more advanced models of the phenomenon, we understand it better.  As we understand it better, we learn how we can trust others more, and how we can build our own trust with others.

Sunday, October 24, 2010

Rant - AI in Education - Building Intelligent Tutors

I am currently reading Building Intelligent Interactive Tutors: Student-centered strategies for revolutionizing e-learning.  You can buy the book on Amazon here.  Or, you can download it, say, for an ebook reader, here.

Life
Super-awesome weekend.  We went to Halloween Horror Nights at Universal Studios to see movie-quality, real-life monsters that jump out at you in a number of haunted houses (8? 10?), combined with the great rides of a world-class theme park.  In addition to that splurge of entertainment, we went to La Nouba, and were thoroughly entertained by live performers for over an hour.  In a few hours, we will be attending a block party.  So, please forgive the late update.


Breaking points
I've been reading Building Intelligent Interactive Tutors by Beverly Park Woolf, and one of the things that she speaks of often is the idea that each industry occasionally reaches a critical threshold.  For instance, the field of computer science benefits from object-oriented programming/design.  The field of physics has made leaps and bounds based on the models that they can now create via computer simulation.  She argues that the field of education is now overdue for such a breakthrough for a few reasons.  In fact, just this month this subject was a featured article on the technology site Slashdot.  You can read more here.


Why now?

In the field of education, learning has been studied and segmented into a few categories:
  • one-on-one instruction versus group (one-on-one is significantly more effective)
  • inquiry learning versus lecture learning (inquiry is more effective)
  • testing versus teaching (tests can make ability gaugeable, but the time is better spent teaching if you already know the ability level)
  • motivational learning versus subject learning (students learn better when motivated)
  • mastery learning (building a subject from the ground up, and asking 'why?') outperforms other forms of learning
Logically speaking, you want a one-on-one teacher that teaches via asking questions (or better yet, getting students to ask the right questions), without any tests, in a subject that the student is interested in.  I can see you rolling your eyes at this.  Despite being the most effective, these teaching methods are also the most difficult to implement.  Having one first grade teacher per student is ludicrous, and attempting to get them to sit still long enough to actually ask questions about subject matter isn't exactly realistic.
Or is it?
There is an obvious exception to this, however, and my reader likely sees it coming.  Intelligent Tutoring Systems offer the real promise of optimal learning.  With each of these subject-area improvements, you can make leaps and bounds with performance.
  • ITS's can tutor one-on-one, and are best this way
  • ITS's can teach via inquiry learning, either by providing a large number of questions, or by grammar-parsing written (or spoken) responses
  • An ITS has no real need to test.  When working in a domain like mathematics, it can assign homework problems that are graded on the spot.
  • ITS's can gauge student involvement as well as or better than a live tutor, using sensors
  • ITS's can use Mastery Learning if constructed in the correct manner by an expert (say, a grade school teacher).

Why do you care?
There is a strong case to be made that the students of the future will be taught via a computer interface that is customized to their needs.  It will keep track of their learning on various subjects, get their interest and keep it, and get them to ask questions about the subject matter.  It is likely that it will be able to be distributed via the Internet, and that a large portion of mankind will be bettered by it.  A significant portion of the planet will be getting the same education that people in first world countries are getting.
There are still some important problems to solve (for instance, all of the above), but it is likely to be only a matter of time before they can be taken care of.

Friday, October 15, 2010

Predicting Searcher Frustration

Predicting Searcher Frustration was written out of the University of Massachusetts by Feild and Allan, in conjunction with Jones from Yahoo!.  It can be found for free online here.

Life
The plan for this weekend is to go camping and enjoy some time out-of-doors.  There are plans for s'mores, canoeing, and hiking, in addition to whatever else strikes our fancy.

Problem
Search engines, like all businesses, are striving to be better in order to claim more market share, ad revenue, and viewers.  As part of this effort, one of the things that they (or at least Yahoo!) are looking into is predicting when users are getting tired/frustrated while looking for data.  If they detect that a specific user is frustrated, then presumably they could make the interface better, give different results, give a different category of results, or simply mark it as an area for future improvement.


Experiment
What we are going to do here is make a bunch of users go on a scavenger hunt for information, and report how they feel about it.  This will be measured in a couple of different ways:
  • Query Logs - including page focuses, clicks, navigation, mouse movements, etc. (47 features in total)
  • Sensor Data - including a mental state camera, pressure sensitive mouse, and pressure sensitive chair
    • mental state camera has 6 states - agree, disagree, unsure, interested, thinking, confident
    • mouse has 6 pressure sensors - 2 on top, 2 on each side
    • chair has 6 sensors - 3 on back, 3 on seat
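
The paper's exact classifier isn't reproduced here, but as a rough sketch, this is how one might train a simple logistic regression over query-log style features to predict a reported-frustration label (all of the data and feature counts below are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one row per search task, 47 query-log features
# (clicks, page focuses, mouse movements, ...); label 1 = user reported frustration.
X = rng.normal(size=(300, 47))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```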

Results
The results support a few important conclusions, covered below.
Why do you care? 
1 - Search engines are getting better, and user modeling is likely to play a role in this in the future
2 - Direct sensor data is not required in order to predict how you are feeling (your webpage views alone are more accurate)

Friday, October 8, 2010

“Yes!”: Using Tutor and Sensor Data to Predict Moments of Delight during Instructional Activities

“Yes!”: Using Tutor and Sensor Data to Predict Moments of Delight during Instructional Activities was written out of Arizona State by Muldner, Burleson, and VanLehn.  It can be found for free online here.

Life
Nothing particularly fancy is going on.  The combined Regular Day Off and Columbus Day have granted a glorious 4-day weekend.  This, combined with today's high of 85, will make for a relaxing weekend of light reading, sushi, and entertainment.

Career
I/ITSEC, the international conference on modeling and simulation, will be in Orlando next month.  It will be a good time, offering everything from paper presentations on how to train nurses, to the latest advancements in computer graphics, to 3-dimensional printing, to firing grenade launchers (mockups with recoil) at insurgents (simulated).

Problem

The AI in Education field, in general, has been focusing on trying to keep students 'in the zone', or to keep them from getting frustrated, through the use of hints, easy questions, scaffolding, or a number of other methods.  However, this paper postulates that the most important moments in education are the "yes!" moments of great success.  The trouble is that we don't know how to detect these moments and may interfere with their occurrence.


"Yes!"
Most everyone has had a "yes!" moment in their education.  If you think back, this is the moment where you had a sudden realization of a concept, or when you had just answered a particularly difficult problem.  This moment probably involved significant work with sudden reward.  As my high school calculus teacher would say, "These are the moments when the student finally understands, and the moments I live for."

Experiment
"Yes!" data was gathered using the Example Analogy (EA) Coach with Newtonian physics.  Interactions with the interface were recorded, and students were asked to think aloud.  Additionally, a posture chair, skin-conductance bracelet, eye tracker, and pressure mouse were used to gather data on the students current state.  The "yes!" moment was labeled by an expert, and the system trained to recognize the occurrence based upon that information.

Results
Posture Chair Data
Logistic regression was used to attempt to make sense out of the sensor data.  As has been found in other studies (such as Automatic prediction of frustration), the data from the posture chair was not particularly usable.  However, through the use of time-based models which included pupil response and input from the other sensors, they were able to correctly predict 60% of the "yes!" events, while incorrectly predicting non-"yes!" events 13% of the time.  Obviously there is some work left to be done in the field, but these results are promising.
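
As a rough sketch of that kind of analysis (the features, data, and thresholds below are fabricated, not the paper's), here is a logistic regression over sensor-style features, with the two rates quoted above computed from its predictions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical stand-in features: pupil response plus a few other sensor channels
# per time window; label 1 = an expert marked a "yes!" moment in that window.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.7, size=500) > 1.0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
pred = clf.predict(X)

# Hit rate on true "yes!" windows, and false-alarm rate on the rest
hit_rate = pred[y == 1].mean()
false_alarm_rate = pred[y == 0].mean()
print(hit_rate, false_alarm_rate)
```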

Why do you care? 
1 - ITS systems can keep students optimally challenged if the students are reporting a high frequency of "yes!" events.  This is just as important as, if not more important than, predicting frustration.
2 - Detection of such events is possible, and should be further investigated.