The main objective of this paper is to describe how Dempster-Shafer's (DS) theory of belief functions fits in the framework of valuation-based systems (VBS). Since VBS serves as a framework for managing uncertainty in expert systems, this facilitates the use of DS belief-function theory in expert systems.
In this paper we discuss some topics of knowledge representation for reasoning under uncertainty in belief network-based systems. In particular, we examine the seemingly obvious but unduly overlooked phenomenon that belief functions can grow at an exponential rate under Dempster's rule of combination, which often drives belief computations into near-worst-case behavior. This problem has severe practical consequences for the development of belief network-based systems in application domains where the knowledge structure determines a dense belief network with high degrees of node linkage and, especially, large clusterings of belief functions. Empirical evidence suggests that belief networks for some types of problem domain, such as classification, tend to be very dense. Despite the development of efficient local computation schemes for belief propagation in general Dempster-Shafer belief networks, it can be concluded that the practical applicability of belief networks is limited by the knowledge structure of the problem domain under consideration.
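To make the growth concrete, here is a minimal sketch of Dempster's rule of combination in Python. The frame and mass functions are illustrative; the point is that each combination multiplies pairs of focal elements, so chains of combinations can approach the 2^|frame| worst case described above.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass) with
    Dempster's rule, renormalizing away the conflict mass. Assumes the
    two functions are not totally conflicting (norm > 0)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

# Each combination can multiply the number of focal elements, which is
# why dense networks push the computation toward the 2^|frame| worst case.
frame = frozenset({'a', 'b', 'c'})
m1 = {frozenset({'a'}): 0.6, frame: 0.4}
m2 = {frozenset({'a', 'b'}): 0.7, frame: 0.3}
print(dempster_combine(m1, m2))
```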
Automated tools and systems for knowledge acquisition and refinement are primarily designed to relieve the well-attested burdens of knowledge engineers. The knowledge acquisition process may benefit from greater participation by the domain expert. A survey of automated knowledge acquisition tools and systems provides a context for end-user knowledge manipulation systems (EUKMS). Software support for knowledge acquisition achieved by EUKMS is presented. Given a pre-defined domain model, the combination of elegant graphical user interfaces to knowledge bases, a technique for automatically converting a user's rules into code, and browsing and editing facilities for modifying those rules represents a significant advance in the development of systems that enable a non-programming knowledge worker to create and refine rules without the mediation of knowledge engineers. A key element of EUKMS is knowledge representation at the interface; knowledge is externally represented to the knowledge worker using domain-familiar abstractions, language, and objects, which map to an internal machine representation of knowledge concealed from the user. An application of the approach in the creation of a system for the interpretation of speech spectrograms is discussed. The outcome of this application suggests that EUKMS are achievable and have an interesting role in knowledge acquisition for complex domains.
Cognitive mapping is introduced as a tool for expert systems development having a role potentially equal to that of the data flow diagrams widely used in information systems development. The cognitive map is used to provide feedback to the domain expert, merge the knowledge of multiple experts, and provide a graphic representation from which the final rule-base is formed. The development of an expert system for loan evaluation is presented to illustrate these uses.
In this paper we construct a propositional formula for each hypothesis in a knowledge base. This is done by traversing the knowledge network constructed from a given knowledge base. The basis for this work is the fact that, for any hypothesis f, both f and NOT f cannot be proved simultaneously in a consistent knowledge base. Thus the satisfiability of a formula of the form (f AND NOT f) indicates an inconsistency in the knowledge base. Since consistency checking in a knowledge base is known to be NP-complete, distributing the task is advantageous. Unlike earlier attempts to check the consistency of knowledge bases, our approach offers the flexibility to distribute the consistency checking across a distributed environment.
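A minimal sketch of the core test, under the assumption that hypothesis support has already been compiled into propositional formulas (the formulas and variable names below are hypothetical): the knowledge base is inconsistent if the conjunction of the formulas derived for f and for NOT f is satisfiable. Brute-force enumeration makes the exponential (NP-complete) cost visible, and independent hypotheses can clearly be checked on different machines.

```python
from itertools import product

def satisfiable(formula, variables):
    """Brute-force satisfiability test; formula maps an assignment dict
    to a bool. Exponential in the number of variables, as expected for
    an NP-complete problem."""
    return any(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# Hypothetical support formulas obtained by traversing the knowledge network:
support_f     = lambda v: v['p'] and v['q']        # ... derives f
support_not_f = lambda v: v['p'] and not v['r']    # ... derives NOT f

inconsistent = satisfiable(lambda v: support_f(v) and support_not_f(v),
                           ['p', 'q', 'r'])
print(inconsistent)  # True: p=q=True, r=False derives both f and NOT f
```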
In 1986, de Kleer and Williams first described the general diagnostic engine (GDE), which combines simulation, truth maintenance, and information theory to perform model-based diagnosis (MBD) of complex physical devices. Currently, a large proportion of research within MBD follows the GDE paradigm. Most of this work applies to digital electronics topologies that lack feedback and are composed of standard component models such as adders and multipliers. We extend the GDE paradigm to physiological domains, where both modeling problems and feedback abound. To deal with the modeling differences between electronics and physiology, we generalize the GDE concept of component to mechanism, since mechanisms based on fundamental laws and feedback loops are the focal modules of many physiological systems. Using steady-state constraints to represent these mechanisms, we employ constraint propagation as a simulation/prediction engine. Although partially describable by steady-state equations, homeostatic systems also exhibit complex dynamic behaviors. In response to initial perturbations/faults, regulators cause physiological systems to evolve through a series of states, each characterized by a unique set of faults. We therefore view the diagnosis of regulated systems as a fourfold task: 1) find the fault sets of 'candidate' diagnoses within each state, 2) use static regulatory models to explain as many candidate faults as possible, 3) use dynamic regulatory models to link candidates from temporally adjacent states into global explanation chains, and 4) use these global chains and their estimated likelihoods as the information-theoretic basis for determining which variable to measure next, and when (i.e., in which state). By focusing on physiological domains, we thus extend the GDE paradigm to diagnose time-varying systems with dynamic faults (i.e., those that do not necessarily persist throughout diagnosis).
An account is given of how knowledge-intensive explanatory reasoning can be used to construct an interpretation of ultrasonic signals indicating the existence of cracks and similar defects in solid materials; this AI 'abductive' reasoning is able to simultaneously inspect and classify the material sample in question. The abduction (logic-programming-based) engine used is implemented in the PROLOG code AMAL, and conducts backward chains that reduce observations to known facts via general laws. AMAL is capable of making assumptions in ways comparable to those recently explored in truth maintenance and logic programming frameworks.
Diagnosis of a malfunctioning physical system is the task of identifying those component parts whose failures are responsible for discrepancies between observed and correct system behavior. The goal of interactive diagnosis is to repeatedly select the best information-gathering action to perform until the device is fixed. We developed a probabilistic diagnosis theory that incorporates probabilistic reasoning into model-based diagnosis. In addition to the structural and functional information normally used in model-based diagnosis, probabilities of component failure are also used to solve the two major subtasks of interactive model-based diagnosis: hypothesis generation and action selection. This paper describes a model-based diagnostic system built according to our probabilistic theory. The major contributions of this paper are the incorporation of probabilistic reasoning into model-based diagnosis and the integration of repair as part of diagnosis. The integration of diagnosis and repair makes it possible to effectively troubleshoot failures in complex systems.
Diagnosis is inference about the state of a system from its observed and expected behaviors. When no complete description is available, and this is not out of the ordinary in real-world applications, diagnosing a system cannot be done in a purely deductive way. More specifically, deduction allows us to derive only partial diagnoses, which must be completed to get closer to the actual one. Consequently, searching for better diagnoses requires hypothetical reasoning, where the assumptions generated aim to reflect the diagnostician's beliefs. Within the framework of hypothetico-deductive diagnosis, several approaches have been proposed so far. The consistency-based method is the simplest: it sanctions the lack of evidence that a component of a system fails by jumping to the conclusion that this component behaves correctly. In contrast to the consistency-based approach, the circumscription-based and the deductive/abductive methods take into account how components behave in order to complete what is deductively generated. This paper is devoted to a comparison of the consistency-based, the circumscription-based, and the deductive/abductive approaches to diagnosis. Its purpose is to provide a deeper understanding of these techniques. It is organized as follows: problem formulation and terminology are introduced in Section 2; Section 3 proposes a brief overview of the consistency-based, the circumscription-based, and the deductive/abductive methods; Section 4 details and compares the preference criteria each approach supports; Section 5 illustrates this comparison on a simple example; and Section 6 concludes the paper.
A critical area in electronics assembly manufacturing is the test and repair area. Computerized decision aids in this area can facilitate enhanced system performance. A key to developing computer-based aids is gaining an understanding of the human problem-solving process in the complex task of troubleshooting in electronics manufacturing. In this paper, we present a computational model of troubleshooting and learning in electronics assembly manufacturing. The model is based on a theory of knowledge representation, reasoning, and learning that is grounded in observations of human problem solving. The theory provides a foundation for developing applications of AI in complex, real-world domains.
Complex applications in artificial intelligence need multiple representations of knowledge and tasks in terms of abstraction levels and points of view. The integration of numerous resources (knowledge-based systems, real-time systems, databases, etc.), often geographically distributed on different machines connected into a network, is moreover a necessity for the development of real-scale systems. The distributed artificial intelligence (DAI) approach is thus becoming important for solving problems in complex situations. There are several currents in DAI research, and we are involved in the design of DAI programming platforms for large and complex real-world problem-solving systems. Blackboard systems constitute the earliest architecture; they are based on a shared memory which permits communication among a collection of specialists, together with an external, unique control structure. Blackboard architectures have been extended, especially to introduce parallelism. Multi-agent architectures are based on coordinated agents (problem solvers) communicating most of the time via message passing. A solution is found through cooperation among several agents, each of them in charge of a specific task, but none having sufficient resources to obtain a solution alone. Coordination, cooperation, and exchanges of knowledge, goals, and plans are then necessary to reach a global solution. Our own research follows this last line. The present paper describes the Multi-Agent Problem Solver (MAPS), an agent-oriented language for DAI system design embedded in a full programming environment. An agent is conceived as an autonomous entity with specific goals, roles, skills, and resources. Knowledge (descriptive and operative) is distributed among agents organized into networks (agents communicate through message sending). Agents are moreover geographically distributed and run in parallel. Our purpose is to build a powerful environment for designing DAI applications that not only solves large problems, but also helps in the formulation, description, and decomposition of a problem in terms of groups of intelligent agents. Several applications have been developed with MAPS in computer vision (the KISS system), biomedical diagnosis (the KIDS system), and speech understanding. The KISS system is presented to illustrate the potential of MAPS.
Large computer networks with dynamic links present special problems in adaptive routing. If the rate of change in the network links is fairly rapid and the changes are nonperiodic, then obtaining the optimal solution for adaptive routing becomes complex and expensive. In addition to the academic value of the solution, the growth of computer networks gives the problem practical importance. Learning automata are a logical approach to this problem. With the right parameter values, learning automata can converge arbitrarily close to the solution for a given network topology and set of conditions. The adaptability of automata reduces the depth of analysis needed for network behavior; the survivability and robustness of the network are also enhanced. Finally, each automaton behaves independently, making automata ideal for distributed decision-making and minimizing the need for inter-node communication. Previous work on automata and network routing does not address how changes in network parameter values affect the performance of automata-based adaptive routing. Such knowledge is essential if we are to determine the suitability of an automata-based routing algorithm for a given network. Our paper focuses on this question and shows that in packet-switched datagram networks, relationships do indeed exist between network parameters and the performance of distributed adaptive routing algorithms. Additionally, our paper compares the performance and behavior of several types of learning automata, as well as changes in automata behavior over a range of reward and penalty values. Finally, the performance of two automata-based adaptive routing algorithms is compared. Our automaton model is a stochastic, linear, S-model automaton. In other words, the automaton's matrix of action probabilities changes as a result of performance feedback from the environment, the response to environment feedback is linear, and the feedback received from the environment ranges over a continuous interval.
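For concreteness, a sketch of a linear S-model update, the linear reward-penalty scheme, applied to a toy routing choice. The link delays and parameter values are illustrative assumptions, not the paper's experimental setup.

```python
import random

def s_lrp_update(probs, chosen, beta, lam=0.1):
    """Linear reward-penalty update for an S-model learning automaton.
    beta in [0, 1] is the environment response (0 taken as most
    favorable here; conventions vary). The vector stays normalized."""
    r = len(probs)
    new = probs[:]
    for j in range(r):
        if j == chosen:
            new[j] += lam * (1 - beta) * (1 - probs[j]) - lam * beta * probs[j]
        else:
            new[j] += -lam * (1 - beta) * probs[j] \
                      + lam * beta * (1.0 / (r - 1) - probs[j])
    return new

# Hypothetical use in routing: actions are outgoing links, beta is the
# normalized delay observed for the routed packet.
probs = [1 / 3] * 3
for _ in range(200):
    link = random.choices(range(3), weights=probs)[0]
    beta = [0.2, 0.6, 0.9][link]          # link 0 has the lowest delay
    probs = s_lrp_update(probs, link, beta)
print(probs)                               # mass shifts toward link 0
```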
Heuristic search is a fundamental component of artificial intelligence applications. Because search routines are frequently a computational bottleneck, numerous methods have been explored to increase the efficiency of search. While sequential search methods use exponential amounts of storage and yield exponential run times, parallel algorithms designed for MIMD machines significantly reduce the time spent in search. In this paper, we present a massively parallel SIMD approach to search named MIDA* search. The components of MIDA* include a very fast distribution algorithm which biases the search to one side of the tree, and an incrementally deepening depth-first search performed by all the processors in parallel. We show the results of applying MIDA* to instances of the fifteen-puzzle problem. Results reveal an efficiency of 76% and speedups of 8553% and 492% over serial and 16-processor MIMD algorithms, respectively.
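The serial core that MIDA* parallelizes is an iterative-deepening depth-first search bounded by f = g + h, as in Korf's IDA*. The sketch below shows only that serial core on a toy graph; the SIMD distribution algorithm is not reproduced.

```python
def ida_star(start, h, successors, is_goal):
    """Iterative-deepening A*: repeated depth-first searches with an
    increasing f = g + h bound; each failed pass returns the smallest
    f-value that exceeded the bound, which becomes the next bound."""
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None
        if is_goal(node):
            return f, path
        minimum = float('inf')
        parent = path[-2] if len(path) > 1 else None
        for child, cost in successors(node):
            if child == parent:               # skip trivial 2-cycles
                continue
            t, found = dfs(child, g + cost, bound, path + [child])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h(start)
    while True:
        bound, solution = dfs(start, 0, bound, [start])
        if solution is not None:
            return solution
        if bound == float('inf'):
            return None

# Toy demo: a line graph 0-1-2-3 with unit costs and h = distance to 3.
succ = lambda n: [(m, 1) for m in (n - 1, n + 1) if 0 <= m <= 3]
print(ida_star(0, lambda n: 3 - n, succ, lambda n: n == 3))  # [0, 1, 2, 3]
```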
A hardware accelerator that performs fuzzy learning, fuzzy inference, and defuzzification strategy computations is presented. The hardware is based on two-valued logic. A universal space of 25 elements with five levels each is supported. To achieve a high processing rate for real-time applications, the basic units of the accelerator are connected in a four-level pipeline. The accelerator can receive two parallel fuzzy data as inputs. A flag is set if the fuzzy model R(u,w), constructed in a learning process, satisfies R(u,w) = 1 for all (u,w) in U x W. At a clock rate of 20 MHz, the accelerator can perform more than 1,400,000 fuzzy logic inferences per second on multi-dimensional fuzzy data.
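A software sketch of the kind of computation such a pipeline performs: max-min compositional inference over a learned relation R, plus the all-ones flag test. The learning rule shown (pointwise max of min(a_i, b_j) over training pairs) is a common choice and is an assumption here, since the abstract does not spell out the rule.

```python
import numpy as np

def fuzzy_infer(a, R):
    """Max-min compositional rule of inference: fuzzy input a over U,
    relation matrix R over U x W, fuzzy output over W."""
    return np.max(np.minimum(a[:, None], R), axis=0)

def learn_relation(pairs, n_u, n_w):
    """Accumulate R as the pointwise max of min(a, b) over training
    pairs (an assumed learning rule, for illustration only)."""
    R = np.zeros((n_u, n_w))
    for a, b in pairs:
        R = np.maximum(R, np.minimum(a[:, None], b[None, :]))
    return R

a = np.array([0.2, 1.0, 0.5])
b = np.array([0.3, 0.9])
R = learn_relation([(a, b)], 3, 2)
print(fuzzy_infer(a, R))      # inferred fuzzy output over W
print(np.all(R == 1.0))       # the 'R(u,w) = 1 everywhere' flag
```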
In this paper, a new model called ParBack for parallelizing the backward-chaining inference technique in production systems is presented. In this model, the data dependencies between rules are analyzed and then converted to special notations that constitute the search space. Parallelism is exploited in three directions: some processors are dedicated to performing the inference process; other processors guide the inference process toward the useful paths in the search space; and a third group of processors applies the rule-pruning principle that reduces the length of the paths in the search space. The results of a simulation study of ParBack show that a speedup of around 850-fold can be obtained.
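For readers unfamiliar with the inference direction being parallelized, here is a naive serial backward chainer over Horn-style rules. The rules and facts are illustrative; ParBack's dependency notations, guidance processors, and rule pruning are not reproduced.

```python
def backward_chain(goal, rules, facts):
    """Serial backward chaining over rules of the form (head, [subgoals]).
    Returns True if the goal is derivable. No cycle detection: a sketch."""
    if goal in facts:
        return True
    return any(head == goal and
               all(backward_chain(g, rules, facts) for g in body)
               for head, body in rules)

rules = [('diagnosis_ok', ['sensor_ok', 'model_ok']),
         ('sensor_ok', ['calibrated'])]
facts = {'calibrated', 'model_ok'}
print(backward_chain('diagnosis_ok', rules, facts))  # True
```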
This paper presents an extension to the COBWEB conceptual clustering algorithm. The extension is designed to allow non-mutually-exclusive examples to be clustered. It also allows fuzzy examples to be clustered, which has the side effect of placing examples into more than one cluster or class. A discussion of some related work and an ideal fuzzification are presented. Preliminary results are shown for one fuzzy data set on creditworthiness.
A good teacher will provide crucial information about a new task, rather than simply performing examples with no elaboration. Machine learning paradigms have ignored this form of instruction, concentrating on induction over multiple examples, or knowledge-based generalization. This paper presents a model of supervised task learning designed to exploit communicative acts. Instruction is viewed as planned explanation, and plan recognition is applied to the problem at both domain and discourse levels, and extended to allow the learner to have incomplete knowledge. The model includes a domain level plan recognizer and a discourse level plan recognizer that cues a third level of plan structure rewriting rules. The rewriter may add new domain operator schemata. Details are given of an example in which a robot apprentice is instructed in the building of arches.
A feed-forward neural net with backpropagation has proved to be a better predictor of economic forecasts than traditional statistics-based systems. The market price of a stock can be predicted by training the net on its price data from the past several months along with relevant economic parameters. An ID3 extension, trained on the same data, can give explanations based on the classifications it makes. The latter of the two similarity-based reasoners, i.e., the case-based reasoner and the Grossberg net, can, if its vigilance parameter is adapted appropriately, adequately classify newly arriving data in a dynamically changing environment while keeping a history of stock price transitions. However, the economic world undergoes never-ending changes under the competition of several antagonistic factors. These factors are modeled using rules that form a subset of the rule-based subcomponent of the case-based reasoner and are triggered when the system judges that an arriving case is not in its past experience. By reviewing predictions against realizations, the genetic algorithm system, applying its reproduction, crossover, and mutation operators, adapts the configuration of the backprop/ID3 components and that of the Grossberg net along with its vigilance parameter, and evolves the economic rules, so that the CBR can work as stably as possible for some time, based on past experience into which the newly arrived cases have just been integrated. The CBR can give various sorts of useful explanations and can be used to construct extended applications, such as portfolio organization, on top of the predictions and history the system has accumulated.
One major bottleneck in the automation of the drilling process by robots in the aerospace industry is drill condition monitoring. The effort to solve this problem has resulted in the development of an intelligent drilling machine that works with industrial robots. This computer-controlled drilling machine has five built-in sensors. Based on pattern recognition of the sensor outputs with sets of algorithms, an intelligent diagnostic system has been developed for on-line detection and pinpointing of any one of nine drill failure modes, including chisel edge wear, margin wear, breakage, flank wear, crater wear, lip height difference, corner wear, and chipping at the lips. However, the complexity of the manufacturing process involves many inherent factors which affect the criteria for judging drill wear/breakage, such as drill size, drill material, drill geometry, workpiece material, cutting speed, feedrate, material microstructure, hardness distribution, etc. A self-learning system is therefore required and has been implemented, enabling the machine to acquire the knowledge needed to judge the drill condition regardless of the complexity of the situation.
Genetic algorithms (GAs) are a class of probabilistic optimization algorithms which utilize ideas from natural genetics. In this paper, we apply the genetic algorithm to a difficult machine learning problem, viz., learning the description of a pushdown automaton (PDA) that accepts a context-free language (CFL), given legal and illegal sentences of the language. Previous work has involved the use of GAs in learning descriptions of finite state machines for accepting regular languages. CFLs are known to properly include the regular languages, and hence the learning problem addressed here is of greater complexity. The ability to accept context-free languages can be applied to a number of practical problems, such as text processing and speech recognition.
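The fitness evaluation at the heart of such a GA must run each candidate PDA on the sample sentences. Below is a sketch of a nondeterministic PDA acceptance test; the transition encoding is an illustrative assumption, not the paper's chromosome representation.

```python
def pda_accepts(s, transitions, start='q0', accept=('q1',)):
    """Nondeterministic PDA simulator: transitions maps
    (state, input symbol, stack top) -> [(next state, push string)],
    where a '' stack top matches without popping. Acceptance is by
    final state with an empty stack."""
    configs = {(start, '')}                      # (state, stack) pairs
    for ch in s:
        nxt = set()
        for state, stack in configs:
            top = stack[-1:]
            for (st, sym, tp), outs in transitions.items():
                if st == state and sym == ch and (tp == top or tp == ''):
                    base = stack[:-1] if tp and tp == top else stack
                    nxt.update((st2, base + push) for st2, push in outs)
        configs = nxt
    return any(state in accept and stack == '' for state, stack in configs)

# a^n b^n, a canonical CFL beyond any finite state machine:
T = {('q0', 'a', ''): [('q0', 'A')],
     ('q0', 'b', 'A'): [('q1', '')],
     ('q1', 'b', 'A'): [('q1', '')]}
print(pda_accepts('aabb', T), pda_accepts('aab', T))  # True False
```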
Detailing the problem and writing the problem specification are vital to the successful implementation of any potential automated inspection system. In practice, arriving at a well-defined problem specification is extremely difficult, and many well-designed systems have failed in implementation due to an inadequate problem specification. This paper presents a knowledge-based technique for developing a problem specification for dimensional measurement applications of machine vision. The motivation for developing a new systematic and computerized approach to problem specification is briefly covered. The peculiarities of this particular application and how they influence the knowledge-based system are discussed. This paper also details the methodology used for knowledge acquisition and refinement. Finally, the implementation of this new approach is discussed. A comprehensive knowledge-based tool was designed to clearly and precisely define production problems and to aid in accurate specifications for the development of machine vision systems. This tool uses an interactive computer program that takes potential users, with various backgrounds and interests, systematically through a step-by-step interrogation of the particular application. It then outputs a complete problem specification that aids in the development of a specialized system. The tool consists of modules of questions which are intelligently selected according to the different job descriptions and areas of expertise of the respondents. This method of problem specification has proven to be useful and effective for both the machine vision novice and the expert. Examples of this method are given for machine vision applications and implementations.
This paper discusses current computer-aided software design issues centered on expert system technologies, presenting our results through a typical example, and also introduces, from a conceptual point of view, a design method oriented toward advanced software engineering.
A case associative mobile robot planning system (CAMRPS) which integrates memory organization is being developed. The purpose of CAMRPS is to provide the robot with an environment in which it can plan in terms of high-level tasks and synthesize such plans rapidly. At all stages of the planning process it can consult the case associative memory (CAM) to see what is known from experience about similar plans. Efficient use of prior experiences is emphasized. CAMRPS remembers and recollects all cases on the basis of internal similarity between cases. Using a similarity metric, all old cases are grouped into clusters, each of which shares the same commonality measure in memory. New cases are self-organized into a new cluster or a pre-existing cluster according to a similarity comparison. Generally speaking, a hierarchical indexing structure in CAMRPS is constructed dynamically and extended as the system gradually accumulates new experiences. The framework of CAMRPS, the hierarchical structure of the CAM, and an illustrative example are given in the paper.
This paper examines various issues related to the theory, design, and implementation of a system that supports creative activity in a multimedia environment. The system incorporates artificial intelligence notions to acquire concepts of the problem domain. This paper investigates this environment by considering a model that serves as the basis for a system supporting a history of user interaction. A multimedia system that supports creative activity is problematic. It must function as a tool allowing users to experiment dynamically with their own creative reasoning process, a very nebulous task environment. It should also support the acquisition of domain knowledge so that empirical observation can be further evaluated. This paper aims to illustrate that, via the reuse of domain-specific knowledge, closely related ideas can be developed quickly. This approach is useful in the following sense: multimedia navigational systems hardcode referential links with respect to a web or network. Although users can access or control navigation in a nonlinear (static) manner, these referential links are 'frozen' and cannot capture users' creative actions, which are essential in tutoring or learning applications. This paper describes a multimedia assistant based on the notion of knowledge-links, which allows users to navigate through creative information in a nonlinear (dynamic) fashion. A selection of prototype code based on object-oriented techniques and logic programming partially demonstrates this.
Car traffic is one of the widespread applications where domain experts need to employ different kinds of knowledge for diagnosing a single fault. The application provides application-oriented kinds of knowledge such as fault trees, heuristics, fault models, and functional models. Hence, for knowledge-based diagnosis, a problem-solving method is needed which is able to evaluate multiple kinds of knowledge. The presented approach uses knowledge compilation to transpose knowledge of different kinds into one target kind, and the required problem-solving method is then developed for that target kind. The problem of specifying a target kind which overlaps with the application-oriented kinds is approached by starting from the task of diagnosis, which defines a task-oriented kind covering the knowledge necessary for solving the task. Since the necessary knowledge must be available in the application-oriented kinds, the task-oriented kind overlaps with each of them, and its problem-solving method serves as the required method.
An important problem in developing natural language dialog systems is to computationally specify when and why the system should speak. This paper proposes interruptible theorem proving as a solution. Theorem proving is used to determine when domain goals are complete. Language is used to acquire missing axioms that may be inhibiting proof completion. The paper describes this missing axiom theory for use of language and how it enables the needed dialog processing behaviors to be achieved. The theory is illustrated with a sample dialog segment obtained from actual use of an implemented dialog system. Performance results of this system based on more than 140 dialogs are also given.
Anaphors, like polysemy, are basic natural components of language. We show how we manage such phenomena in the context of our development platform DOCAL at the University of Technology of Compiegne. These problems are too vast to be tackled directly in their entirety, so our field of study is restricted, for example, to endophoric pronominal references. We touch upon the morpho-syntactic filtering of polysemic expressions in order to concentrate more fully on the semantic analysis. To resolve ambiguities, our system relies on the notion of semantic markers. If this is not sufficient, rules defined at the level of the semantic dictionary are called upon, which constrain the interaction between the different concepts in the sentence. Finally, if several interpretations are still plausible, the system chooses the one that integrates best into its knowledge base. We then elaborate on the mechanisms just presented, above all the module that assimilates the resulting semantic networks, as applied to the processing of certain endophoric pronominal references.
AI systems for the general public have to be highly tolerant of errors. These errors can be of several kinds: typographic, phonetic, grammatical, or semantic. A special lexical dictionary architecture has been designed to deal with the first two. It extends the hierarchical file method of E. Tanaka and Y. Kojima.
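A typo-tolerant lookup ultimately rests on some form of edit-distance test. The brute-force sketch below shows the idea; the hierarchical file method of Tanaka and Kojima, which the paper extends, exists precisely to avoid this full scan of the lexicon.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def tolerant_lookup(word, lexicon, k=1):
    """Return lexicon entries within k edits: a brute-force stand-in
    for the indexed hierarchical-file search."""
    return [w for w in lexicon if edit_distance(word, w) <= k]

print(tolerant_lookup('recieve', ['receive', 'recipe', 'relieve'], k=2))
```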
The amount of information is becoming ever greater; in industries where complex processes are performed, it is becoming increasingly difficult to profit from all the documents produced when fresh knowledge becomes available (reports, experiments, findings). This situation causes a considerable and expensive waste of precious time searching for documents or, quite simply, results in outright repetition of what has already been done. One solution is to transform all paper information into computerized information. We might imagine that we are in a science-fiction world and that we have the perfect computer: we tell it everything we know, we make it read all the books, and if we ask it any question, it finds the response if that response exists. Unfortunately, we are in the real world, and the last four decades have taught us to minimize our expectations of computers. During the 1960s, information retrieval systems appeared. Their purpose is to provide access to any desired document, in response to a question about a subject, even if the document is not known to exist. Here we focus on the problem of selecting items to index the documents. In 1966, Salton identified this problem as crucial when he saw that his system, Medlars, failed to find a relevant text because of wrong indexing. Faced with this problem, he imagined a guide to help authors choose the correct indexing, but he anticipated the automation of this operation with the SMART system: it was stated previously that a manual language analysis of information items by subject experts is likely to prove impractical in the long run. After a brief survey of the existing responses to the index choice problem, we present the automatic natural acquisition (ANA) system, which chooses items to index texts using as little knowledge as possible, just by learning the language. This system does not use any grammar or lexicon, so the selected indexes will be very close to the field concerned in the texts.
The HIRONDELLE research project of the Banque de France aims to summarize economic surveys containing statements about a specific economic domain. The principal goal is the detection of causal relations between economic events appearing in the texts. We focus on knowledge representation, based on three distinct hierarchical structures. The first concerns the lexical items and allows inheritance of syntactic properties. Descriptions of the application domains are achieved by a taxonomy based on attribute-value models and case relations, adapted to the economic sectors. The summarization goal of the system defines a set of primitives representing statements and a causality meta-language. The semantic analysis of the texts proceeds in two phases. The first leads to a propositional representation of the sentences through conceptual graph formalization, taking into account the syntactic transformations of sentences. The second is dedicated to the summarizing role of the system, detecting paraphrastic sentences by processing syntactic and semantic transformations such as negation or metonymic constructions.
Most AI planning systems have considered time in a qualitative way only. For example, a plan may require one action to come 'before' another. Metric time enables AI planners to represent action durations and reason over quantitative temporal constraints such as windows of opportunity. This paper presents preliminary results observed while developing a theory of multi-agent adversarial planning for battle management research. Quantitative temporal reasoning seems essential in this domain. For example, Orange may plan to block Blue's attack by seizing a river ford which Blue must cross, but only if Orange can get there during the window of opportunity while Blue is approaching the ford but has not yet arrived. In nonadversarial multi-agent planning, metric time enables planners to detect windows of opportunity for agents to help or hinder each other. In single-agent planning, metric time enables planners to reason about deadlines, temporally constrained resource availability, and asynchronous processes which the agent can initiate and monitor. Perhaps surprisingly, metric time increases the computational complexity of planning less than might be expected, because it reduces the computational complexity of modal truth criteria. To make this observation precise, we review Chapman's analysis of modal truth criteria and describe a tractable heuristic criterion, 'worst case necessarily true.' Deciding whether a proposition is worst case necessarily true in a single-agent plan with n steps requires O(n) computation when only qualitative temporal information is used; we show how it can be decided in O(log n) using metric time.
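The O(log n) claim can be made tangible with a sketch: if the steps that could clobber a proposition are kept sorted by metric time stamp, deciding whether any of them falls inside a window is a binary search rather than a scan of all n steps. This illustrates the complexity argument only; it is not Chapman's criterion itself.

```python
from bisect import bisect_left, bisect_right

def clobbered_in_window(clobber_times, t_establish, t_query):
    """True if some potential clobberer falls strictly between the
    establishing step and the query point. O(log n) via binary search
    over the sorted list of metric time stamps."""
    lo = bisect_right(clobber_times, t_establish)
    hi = bisect_left(clobber_times, t_query)
    return lo < hi

times = [3.0, 7.5, 12.0]                       # hypothetical clobbering steps
print(clobbered_in_window(times, 4.0, 10.0))   # True: 7.5 intervenes
print(clobbered_in_window(times, 8.0, 11.0))   # False
```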
Model-based diagnosis (MBD) has been applied to a variety of mechanisms, but few of these have been in fluid flow domains. Important mechanism variables in these domains are continuous, and the mechanisms commonly contain complex recycle patterns. These properties violate some of the common assumptions of MBD. The CO2 removal assembly (CDRA) for the cabin atmosphere aboard NASA's Space Station Freedom is such a mechanism. Early work on diagnosing similar mechanisms showed that purely associative diagnostic systems could not adequately handle these mechanisms' frequent reconfigurations. This suggested a model-based approach, and KATE, a constraint-based MBD shell, was adapted to the domain. KATE has been successfully applied to liquid flow problems in handling liquid oxygen. However, that domain does not involve complex recycle streams, whereas the CDRA does. KATE had solved constraint sets by propagating parameter values through constraints; this method often fails on constraint sets which describe recycle systems. KATE was therefore extended to allow it to use external algebraic programs to solve its constraint sets. This paper describes the representational challenges involved in that extension and the adaptations which allowed KATE to work within the representational limitations imposed by those algebraic programs. It also presents preliminary results of the CDRA modeling.
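Why value propagation fails on recycle streams, in miniature: each equation below needs a value that the others produce, so no assignment order closes the loop, yet solving the constraints simultaneously, as KATE's external algebraic programs do, is straightforward. The steady-state model and numbers are hypothetical.

```python
import numpy as np

# Hypothetical recycle loop (f = fresh feed, s = recycle split fraction):
#   m = f + r          (mixer)
#   p = 0.6 * m        (separator product draw)
#   r = s * (m - p)    (recycle of the remainder)
# Local propagation deadlocks: m needs r, r needs m. Rewrite the loop
# as a simultaneous linear system A @ x = b over x = [m, p, r].
f, s = 10.0, 0.8
A = np.array([[1.0,  0.0, -1.0],    # m - r = f
              [-0.6, 1.0,  0.0],    # p - 0.6 m = 0
              [-s,   s,    1.0]])   # r - s (m - p) = 0
b = np.array([f, 0.0, 0.0])
m, p, r = np.linalg.solve(A, b)
print(m, p, r)   # approx 14.71, 8.82, 4.71: consistent around the loop
```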
Teleology descriptions capture the purpose of an entity, mechanism, or activity with which they are associated. These descriptions can be used in explanation, diagnosis, and design reuse. We describe a technique for acquiring teleological descriptions expressed in the teleology language TeD. Acquisition occurs during design by observing design modifications and design verification. We demonstrate the acquisition technique in an electronic circuit design.
Complex systems are not amenable to the use of a single model or level of abstraction in describing their dynamic system characteristics. It is often necessary to tie several models together if we want to reason, simulate, or analyze the system. Moreover, the use of a hierarchical representation helps us to more intelligently organize the models. We discuss a modeling process called heterogeneous hierarchical modeling (HHM) which supports multiple representations and supports hierarchical development of time dependent knowledge. Additionally, a hierarchical heterogeneous model provides a natural structure for knowledge-based reasoning. A knowledge-based environment requires such models in order to suggest improvements, guide semantic development, or analyze autonomous decision making models.
Although the object-oriented programming paradigm is an intuitive embodiment of the static attributes of an application, the temporal behavior of object interaction is typically buried in the distributed control structure of the implementation. TOM requires a platform-independent operating system support library which permits the arbitrary scheduling of object message passing. Applications include systems which reason through time; the arbitration of distributed, real-time competing and cooperating reasoning systems; and the rapid construction of simulators for reasoning system validation. Performance and applicability of the package are currently being evaluated via several tactical command and control development systems. TOM permits the arbitrary allocation of objects between processing platforms, i.e., object allocation need not be known at design time. Message passing is extended through host LANs when necessary to reach remote objects. Three prototype temporal behaviors are provided: single, cyclic, and frequency limited. Scheduled message services are qualified with a user-assigned priority which is used to arbitrate host computing resources. The discussion highlights the seamless integration of temporal activity into the object-oriented paradigm and demonstrates the benefits of the package through several diverse example applications.
In this paper, we describe a simple experiment to test the application of genetic algorithms to learning scheduling heuristics. A very simple genetic representation is used which can represent a wide variety of schedulers. Using this representation, we show how the learning technique produces schedulers that are adapted to the scheduling scenario.
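A minimal sketch in the spirit of such an experiment, with all details assumed for illustration: a chromosome is a weight vector over job features, the scheduler it encodes sorts jobs by weighted priority, and fitness is total tardiness on a toy job set.

```python
import random

jobs = [(4, 10), (2, 6), (6, 14), (1, 5)]     # (duration, due date), toy data

def tardiness(weights):
    """Schedule jobs by the weighted priority the chromosome encodes
    and return total tardiness (lower is fitter)."""
    order = sorted(jobs, key=lambda j: weights[0] * j[0] + weights[1] * j[1])
    t = total = 0
    for dur, due in order:
        t += dur
        total += max(0, t - due)
    return total

def evolve(pop_size=20, gens=30):
    pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=tardiness)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(2)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:          # Gaussian mutation
                child[random.randrange(2)] += random.gauss(0, 0.3)
            children.append(child)
        pop = parents + children
    return min(pop, key=tardiness)

best = evolve()
print(best, tardiness(best))
```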
The school bus routing problem involves transporting students from predefined locations to the school using a fleet of school buses with varying capacity. The objective is to minimize the fleet size in addition to minimizing the distance traveled by the buses and the travel time of the students. As the school bus routing problem belongs to the NP-complete class of problems, search strategies based on heuristic methods are most promising for problems in this class. GENROUTER is a system that uses genetic algorithms, an adaptive heuristic search strategy, for routing school buses. The GENROUTER system was used to route school buses for two school districts. The routes obtained by the GENROUTER system were superior to those obtained by the CHOOSE school bus routing system and to the current routes in use by the two school districts.
An established way to synthesize associative memory networks is to use dynamical neural networks. For large-dimensional problems, dynamical networks are usually computationally burdensome to design and generally introduce spurious memories. A new architecture consisting of an input linear filter, a hidden layer formed by a dynamical network, and an output linear filter is proposed in this paper to alleviate some of the difficulties in designing large-dimensional dynamical networks. A learning rule and its simplified version are presented for the design of the network parameters.
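To make "learning rule" concrete, here is the classic outer-product (Hebbian) design rule for a bipolar associative memory with dynamical recall. This is a textbook sketch, not the paper's rule, which additionally designs the input and output linear filters.

```python
import numpy as np

def hebbian_weights(patterns):
    """Outer-product design rule: W = P^T P / n for bipolar patterns
    stored as rows of P."""
    n = patterns.shape[1]
    return patterns.T @ patterns / n

def recall(W, x, steps=5):
    """Run the dynamical network: synchronous threshold updates."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

P = np.array([[1, -1, 1, -1, 1, -1],
              [1, 1, -1, -1, 1, 1]])
W = hebbian_weights(P)
noisy = np.array([1, -1, 1, -1, -1, -1])   # first pattern, one bit flipped
print(recall(W, noisy))                     # recovers [ 1 -1  1 -1  1 -1]
```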
This paper describes an architecture for realizing high-quality production schedules. Although quality is one of the most important aspects of production scheduling, it is difficult even for a user to specify precisely. It is also true, however, that the decision of whether a schedule is good or bad can be made only by a user. This paper proposes the following: the quality of a schedule can be represented in the form of quality factors, i.e., constraints and objectives of the domain, and their structure; quality factors and their structure can be used for decision making at local decision points during the scheduling process; and they can be defined via iteration of user specification processes.
Designing a novel class of devices requires innovation. Often, the design knowledge for these devices does not identify and address the constraints that are required for their performance in the real-world operating environment, so any new design adapted from them tends to be similarly sketchy. To address this problem, we propose a case-based reasoning method called performance driven innovation (PDI). We model design as a dynamic process: we arrive at a design by adaptation from known designs, generate failures of this design for some new constraints, and then use this failure knowledge to generate the required design knowledge for the new constraints. In this paper, we discuss two aspects of PDI: the representation of PDI cases and the translation of failure knowledge into design knowledge for a constraint. Each case in PDI has two components: design knowledge and failure knowledge. Both are represented using a substance-behavior-function model. Failure knowledge comprises internal device failure behaviors and external environmental behaviors. The environmental behavior for a constraint, interacting with the design behaviors, results in the internal failure behavior. The failure adaptation strategy generates, from the failure knowledge, functions which can be addressed using routine design methods. These ideas are illustrated using a coffee-maker example.
Designing the electrical supply system for new residential developments (plat design) is an everyday task for electric utility engineers. Presently this task is carried out manually, resulting in overdesigned, costly, and nonstandardized solutions. As an ill-structured and open-ended problem, plat design is difficult to automate with conventional approaches such as operations research or CAD. Additional complexity in automating plat design is imposed by the need to process spatial data such as circuit maps, records, and construction plans. The intelligent decision support system for automated electrical plat design (IDSS for AEPD) is an engineering tool aimed at automating plat design. IDSS for AEPD combines the functionality of geographic information systems (GIS), a geographically referenced database, with the sophistication of artificial intelligence (AI) to deal with the complexity inherent in design problems. A blackboard problem-solving architecture, built around the INGRES relational database and the NEXPERT Object expert system shell, has been chosen to accommodate the diverse knowledge sources and data models. The GIS's principal task is to create, structure, and formalize the real-world representation required by the rule-based reasoning portion of the AEPD. The IDSS's capability to support and enhance the engineer's design, rather than merely automate the design process through a prescribed computation, makes it a preferred choice among the possible techniques for AEPD. This paper presents the results of knowledge acquisition and the knowledge engineering process, together with AEPD tool conceptual design issues. To verify the proposed concept, a comparison of the results obtained by the AEPD tool with the design produced by an experienced human designer is given.
The system described is designed for the configuration of industrial mixing machines and assists members of the sales department. The process of configuration is split into two parts. In the process-engineering part, the mixing machine is characterized by the kind of agitator and the number of revolutions with respect to the mixing task. In the second part, all components of the mixing machine are determined with regard to the laws of mechanics. Different kinds of knowledge had to be implemented: structural knowledge is represented by objects, whereas algorithmic knowledge is coded in functions. Directed constraints are used to express causal dependencies. Technical parameters and available parts are stored in databases. These databases also serve as an interface to the existing CIM system, and the database format chosen allows easy maintenance of the data by various tools. The process of configuration is characterized by feedback loops: often parameters are needed before they can be computed, so they have to be pre-estimated; later on they are checked and possibly re-computed. The main decisions in the process of configuration are made by the user, whom the system supports by suggesting the cheapest components. The development of the system and the need to make the expert knowledge explicit have had an interesting side effect: the revision of various methods and standards of the manufacturer. It turned out that there are two different kinds of validation: the validation of the process-engineering and mechanical knowledge, done by the experts, and the assessment of usefulness for daily work, carried out by members of the sales department.
In this paper, we present a mixed, monitor/database-centered approach intended to assist and guide both experienced and novice end users. To achieve this objective, DeBuMAII is able to acquire and use the knowledge of the different interlocutors: the algorithm specialist, the library manager, and the application builder. During the design process, DeBuMAII, with the help of previously acquired knowledge, proposes tasks and/or data to the end user so that the user may proceed with the problem resolution.