Newsgroups: rec.arts.int-fiction
Path: gmd.de!xlink.net!sol.ctr.columbia.edu!news.kei.com!ub!acsu.buffalo.edu!goetz
From: goetz@cs.buffalo.edu (Phil Goetz)
Subject: Re: IF complexity
Message-ID: <CFBM9w.1on@acsu.buffalo.edu>
Sender: nntp@acsu.buffalo.edu
Nntp-Posting-Host: shaula.cs.buffalo.edu
Organization: State University of New York at Buffalo/Comp Sci
References: <2a06f9$2g2@Notwerk.mcs.com> <CF5qr4.6yA@acsu.buffalo.edu> <2a1en5$4fb@Notwerk.mcs.com>
Date: Fri, 22 Oct 1993 22:53:56 GMT
Lines: 87

In article <2a1en5$4fb@Notwerk.mcs.com> jorn@Notwerk.mcs.com (Jorn Barger) writes:
>Phil Goetz writes:
>>I basically mean a top-level control sequence that determines how different
>>components are related (e.g. sensory input, knowledge base, pattern-matching,
>>inference mechanisms, and action) and decides what to do next. [...]
>>A cognitive architecture isn't an IF platform, but a model of one person
>>(an agent). 
>
>ooch!  i guess i deny that those 'components' are theoretically distinct,
>in any useful way.

You may be right.  But you may be wrong.  Certainly there are "modules"
in the mind (though not nec. as distinct as Jerry Fodor insists):
visual system (occipital), auditory system (the planum temporale, folded
under the Sylvian fissure), nouns (left medial temporal), verbs (possibly
left dorsolateral prefrontal), proper nouns (left anterior temporal), grammar
(left insula - NOT Broca's area, according to recent work here at UB),
spatial location (dorsal occipital and parietal), object properties
(ventral occipital and parietal), planning (frontal) - all these things are
localized in different areas of the brain.

Perhaps the components I listed in my previous message are the wrong
way to divide things up.  But the idea is the same.  If you came up
with a method that avoided any such distinctions, yet was specific enough
to be _implementable_ and did interesting things, I suppose you might call
that a cognitive architecture (of an unusual type).  It would be a little
like calling a pile of sand a building architecture, though.

>>I'm not implying that the effects of nested beliefs would be narrated,
>>just that they would automatically take effect without any special control
>>by the author.  Phenomena like your example could happen without anyone,
>>including the author, knowing it, unless they either guessed from a
>>character's actions or examined the semantic network.
>
>From your perspective, does it seem like the task of accumulating a
>knowledgebase of stories is like 'cheating'?  that behavior isn't
>interesting unless it's built from simpler components?  Because I
>can't really see how nested beliefs can 'take effect' unless someone 
>has coded whatever sorts of effects nested beliefs can have, which
>is storytelling again.

You are interested in pattern-matching.  I believe a generative system
is needed, which can predict effects never before seen.  I don't see
nested beliefs as being any special case in this regard.  I think you
can do a lot with a knowledgebase of stories; probably one is needed.
In my case, the knowledgebase of stories would tell you what you can
infer as possible actions of characters in different situations.
It would also be essential for a computerized playwright.
But for the simulation of a character's actions, you can only go so far
with just a bunch of stories.  (Remember, I'm from the wannabe-physicist
IF camp.)
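The story-knowledgebase idea above could be sketched as a simple lookup
(a toy sketch in Python; all the names, patterns, and actions here are
hypothetical illustrations, not anyone's actual system):

```python
# Toy story knowledgebase: each entry pairs a situation pattern with
# actions characters have taken in that situation in known stories.
story_kb = [
    ({"mood": "despair", "place": "cliff"}, ["jump", "weep", "wait"]),
    ({"mood": "anger", "has": "weapon"}, ["attack", "threaten"]),
]

def possible_actions(situation):
    """Collect actions from every story whose pattern is contained
    in the character's current situation."""
    actions = []
    for pattern, acts in story_kb:
        if all(situation.get(k) == v for k, v in pattern.items()):
            actions.extend(acts)
    return actions

# A character in despair at a cliff gets the cliff-story actions:
print(possible_actions({"mood": "despair", "place": "cliff",
                        "name": "Annie"}))
```

This is pure pattern-matching: it can only suggest actions some story
already contains, which is exactly the limitation argued below.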

>>Another thing SNePS has is a belief revision system.
>>Say your NPC Annie has decided to throw herself off the cliff based on her
>>belief that Jethro doesn't love her anymore, and Jethro comes and tells her
>>he loves her.  Without a belief revision system, Annie will say "Good"
>and throw herself off the cliff.  Her plan to do so was originally
>>derived from a belief B, but removing B doesn't remove the inferences and
>>plans based on it.  With a belief revision system, all the inferences and
>>plans based on B will also be removed when B is removed, so Annie won't
>>throw herself off the cliff.
>
>I'd expect to handle this with 'belief-revision stories' of the form:
>
>A believes X
>A intends to enact story S
>A learns Y (not X)
>A abandons/modifies plan S
>
>...so all these algorithms reduce to pattern-matching...

If you try to do it this way, you will end up building a belief-revision
system.  "A believes X" could stand for any one of dozens of beliefs
that led to A's decision to enact story S.  When A discovers ~X (not X),
you have to be able to take X and find S somehow.  This means, for every
belief X that leads to S, you must record on X that it leads to S.
Then when you find ~X, you change S.  This is a belief revision system,
done the same way SNePS does it.
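The record-keeping described above - write on each belief X that it led
to plan S, then retract S when ~X arrives - amounts to a small
justification-recording system.  A minimal sketch (my own illustration
in Python, not SNePS itself; all names are hypothetical):

```python
# Minimal justification recording: every derived plan remembers which
# beliefs support it, so retracting a belief retracts the plan too.
class BeliefStore:
    def __init__(self):
        self.beliefs = set()
        self.plans = {}          # plan S -> set of supporting beliefs X

    def assert_belief(self, belief):
        self.beliefs.add(belief)

    def derive_plan(self, plan, supports):
        # Record on each belief X that it leads to plan S.
        self.plans[plan] = set(supports)

    def retract(self, belief):
        self.beliefs.discard(belief)
        # Drop every plan that depended on the retracted belief.
        self.plans = {p: s for p, s in self.plans.items()
                      if belief not in s}

annie = BeliefStore()
annie.assert_belief("Jethro doesn't love me")
annie.derive_plan("jump off cliff", ["Jethro doesn't love me"])
annie.retract("Jethro doesn't love me")
print("jump off cliff" in annie.plans)   # the plan is gone with the belief
```

Without the `self.plans` bookkeeping there is nothing linking X to S,
which is the point of the next paragraph.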

If you don't save the information that X led to S, you CAN'T just pattern
match and find the appropriate S for some X.  No way.  You would have to
reduplicate all inferences from X.  If you believe you can find S from X
with just pattern-matching, then there's no need for inference; you can
do all forward-chaining and backward-chaining with your story library.
I say you can't.  Inference is definitely needed.

Phil goetz@cs.buffalo.edu
