Re: Open Heart Logic (was Re: [Aleader-dev] Open Heart)


From: William L. Jarrold
Subject: Re: Open Heart Logic (was Re: [Aleader-dev] Open Heart)
Date: Sun, 26 Oct 2003 23:59:50 -0600 (CST)


On Wed, 22 Oct 2003, Joshua N Pritikin wrote:

> [This is a quick overview of design issues which I
> wrote up over the last 1-2 days.]
>
> I am ready to get started, but there are some important
> design decisions which should get resolved at the beginning.
> Your opinion is needed because I am relatively weak in
> statistical inference and ontological engineering.
>
> I played with Open Mind (Common Sense).  I also looked at
> Mindpixel.  Technically, these kinds of sites are really
> easy to implement (assuming we don't need natural language
> parsing).  I bet I can do the basic stuff in 2-5 months.
>

Hmm, I don't know Mindpixel... I took a look at
http://www.mindpixel.com/.  Just saw lotsa stuff about Iraq,
not too much about AI.  Saw some other hype.  A lofty quote
from Henry Lieberman.  Some pictures of the brain with pretty
colors.  I guess I'll have to dig deeper someday once I get
my email inbox below 700.  (It is back up to 1100 now...)-:)

> Entering knowledge in Open Mind is somewhat boring.  It is
> somewhat more fun to compare my entries with related knowledge
> which other people have entered.  In section 4.1.5 of your
> thesis, you mention something about this in "assignment of
> participants to conditions" (p. 71).  I do not completely
> understand the issues.  Even if we try to test people
> individually, I am skeptical that we can prevent people
> from seeing each other's questions/answers.

Huh?  Why not?  All people will do is click on a 1-5
rating for "please rate the believability of the
scenario you are looking at"... If they wanna see what
others have rated, a program will force them to be good
compliant boys and girls and not give them the "reward"
of seeing how other people rated these things until AFTER
they are done.

>
> In other words, what is our statistical methodology?
>

A continuing, unfolding scam.  After all, there are lies,
damn lies, and statistics... No, seriously... see the answer
to the next question...

>
> Can we assume a single methodology or are many different
> statistical approaches possible?

It is very likely that many different statistical approaches
are possible.

I think we can hack our way through the first 100 survey responses
just to learn more about the issues.  As Mr. Bottom Up Man, I am
a firm believer in learning via experience.  Plus, it lets me
indulge in procrastination.

>
> What I imagine is that we'll ask a question like:
>
>   Tracy wants a banana.
>   Mummy gives Tracy an apple.
>   ->
>   Tracy is sad because she wants a banana.
>
>   Believable?  (Yes) (Somewhat) (Not really) (No)

*Exactly*... Except for the minor detail that I think it is
better to have a Likert rating scale.  That is,
"Please rate the believability of the above scenario on a scale
from one to five"... There should be an example in the dissertation.
If not, or if you wanna see more examples, I can send you a survey soon.
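
Just to make the item format concrete, here is a rough sketch (field and
function names are made up, not anything final) of how an item and its
1-5 believability rating could be stored:

    # Rough sketch only; names are invented for illustration.
    item = {
        "scenario": ("Tracy wants a banana.\n"
                     "Mummy gives Tracy an apple.\n"
                     "-> Tracy is sad because she wants a banana."),
        "scale": (1, 5),   # 1 = not at all believable, 5 = very believable
    }

    responses = []

    def record_response(subject_id, item_id, rating):
        """Store one 1-5 believability rating from one subject."""
        assert 1 <= rating <= 5
        responses.append({"subject": subject_id,
                          "item": item_id,
                          "rating": rating})

    record_response(subject_id=7, item_id=1, rating=4)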

>
> Then we'll show the user stats about how many people voted
> for each evaluation, adjusting for ablation.  (?)

Ugh.  More or less.  To really get across how that works
will take a long time (not to mention a much better understanding
of stats than I have now).

Hmm, should I attempt to explain statistical inference to you?
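
At the simplest level, though, the per-item stats I have in mind are just
a tally of the 1-5 ratings plus a mean, e.g. something like this sketch
(it leaves out the ablation adjustment completely):

    # Minimal sketch: count how many people gave each 1-5 rating, plus the
    # mean.  The ablation adjustment is deliberately left out here.
    def summarize(ratings):
        counts = dict((r, 0) for r in range(1, 6))
        for r in ratings:
            counts[r] += 1
        mean = sum(ratings) / float(len(ratings)) if ratings else None
        return counts, mean

    counts, mean = summarize([4, 5, 3, 4, 4])
    # counts == {1: 0, 2: 0, 3: 1, 4: 3, 5: 1}, mean == 4.0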

>
> What other testing formats do we want to accommodate?
> A good prediction about this can save us redesign later.

Hmm, not sure.  I mean, I am sure we will eventually want
testing formats other than the items in my dissertation.

>
> I think we can avoid running the computer model in real-time.
> All the questions and predicted answers can be computed and
> translated to English by a nightly cron job.

Okay.  Sure.  Certainly for starters, definitely.
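
Even a dumb batch script would be fine for starters, along the lines of
this sketch (run_model and to_english are placeholders for whatever we
actually build, not real code of ours):

    # Sketch of the nightly job: no real-time model, just precompute
    # everything and write it out.  run_model/to_english are placeholders.
    def nightly_update(items, run_model, to_english, out_path):
        with open(out_path, "w") as out:
            for item in items:
                prediction = run_model(item)            # model's 1-5 rating
                out.write(to_english(item, prediction) + "\n")

    # cron would run something like:  python nightly_update.py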

>
> We can automatically notify users via email when the model's
> opinion changes vs. their opinion.  Will we allow people
> to change their answers?

Maybe.  We'll probably want to be able to do it both ways...
I guess we kinda want to control the order in which items are seen.  That
way we can counterbalance (and test) for item-order effects... But
if preventing them from changing their mind is a big hassle, let's
not worry about it.
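
One cheap way to control item order, if we want it, is to derive each
subject's order from their id so it is reproducible; a sketch (purely
illustrative, a random shuffle rather than a proper Latin square):

    # Sketch: each subject gets a reproducible item order, so we can later
    # look for item-order effects.  A real counterbalanced design (e.g. a
    # Latin square) would be more systematic than this random shuffle.
    import random

    def item_order(subject_id, n_items):
        rng = random.Random(subject_id)   # same subject -> same order
        order = list(range(n_items))
        rng.shuffle(order)
        return order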

Hmm, I seem so non-committal about everything.  And this
is perhaps a function of my bottom-up approach.  I hope I am not
being too bottom-uppy here... But I hate software design documents.

> Should we keep record of all
> answers (old & new)?

Yes!  I never throw anything away... But seriously, people's time
is expensive.  Memory is cheap.  And getting cheaper by the day.
Therefore, we cannot afford to delete our data.  That is, it is a waste
of our resources to decide what to delete and what not to.  Just
save it all and we can "grep" for whatever we want to use.
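
In other words, just append every answer with a timestamp and never
overwrite anything, roughly like this sketch (file name and record format
are made up):

    # Sketch: append-only answer log; old answers are never deleted, a
    # changed answer is simply a newer line.  Format is made up.
    import time

    def log_answer(subject_id, item_id, rating, path="answers.log"):
        with open(path, "a") as log:        # "a" = append, never truncate
            log.write("%d\t%s\t%s\t%d\n"
                      % (int(time.time()), subject_id, item_id, rating))

    # Later we can literally grep the file, e.g.:  grep item42 answers.log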

>
> I want to include multiple KR models, but it would be cool
> if there was maximum compatibility between the models.  For
> example, my pronoun-desirability questions should be
> a compatible extension of your simple desirability.
> On the other hand, I want the KR interface as separate as
> possible from the web stuff so that we can theoretically
> plug-in something unique like ThoughtTreasure without
> architectural indigestion.

Yes, I agree with the above paragraph exactly.  If our models
are not compatible, then we will get minimal reusability.
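
Concretely, I picture the web code talking to every KR model through one
tiny interface, so that something like ThoughtTreasure could slot in later
as just another plug-in; a rough sketch (method names invented):

    # Rough sketch of the KR plug-in boundary; method names are invented.
    class KRModel(object):
        """All the web layer is allowed to know about any KR model."""
        def predict(self, scenario):
            """Return the model's 1-5 believability rating for a scenario."""
            raise NotImplementedError

    class SimpleDesirability(KRModel):
        def predict(self, scenario):
            # toy stand-in for the real desirability reasoning
            return 5 if "wants" in scenario else 3

    # A ThoughtTreasure wrapper would just be another KRModel subclass,
    # so the web code never needs to know which model answered.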

>
> I anticipate that our database will grow rather slowly
> compared to Open Mind since we are much more vigilant
> about the quality of our information.

Yes.  Exactly.

> One side benefit is
> that we probably don't have to worry about getting hosted on
> a super powerful computer.

Hmm.  Does Open Mind have a super powerful computer at its
disposal?

>
> Once we have something basic working then we will be able
> to publish an article such as "The [Open] Heart Logic
> Initiative" with a call-for-participation.

Yippee.  But I'd rather get a lot of friends to try it out first.
Also, if we have something soonish, maybe we can get some subjects
from the UT subject pool to try it.  At the end of the semester
there are usually a bunch of kids who have procrastinated and missed
their chances to participate in required research.  As a result
they must do some dumb assignment which some poor slob grad student
must grade.  Well, we can save the poor slob grad student IF our
system is working in time.  I'll need to check with Diane whether
this is an option, but if you think you can have something
which more or less replicates my dissertation ready by, say, Nov 25th,
then I should ask Diane soon if we can qualify for the study.  They
will have to make special exceptions since we did not go through
the usual application process back in Aug/Sept.

Bill
>
> --
> A new cognitive theory of emotion, http://savannah.nongnu.org/projects/aleader
>



