Computation with Imprecise Probabilities—A Bridge to Reality

Lotfi A. Zadeh[1]*

 

Extended Abstract

An imprecise probability distribution is an instance of second-order uncertainty, that is, uncertainty about uncertainty, or uncertainty² for short. Another instance is an imprecise possibility distribution. Computation with imprecise probabilities is not an academic exercise—it is a bridge to reality. In the real world, imprecise probabilities are the norm rather than the exception. In large measure, real-world probabilities are perceptions of likelihood. Perceptions are intrinsically imprecise, reflecting the bounded ability of human sensory organs, and ultimately the brain, to resolve detail and store information. The imprecision of perceptions is passed on to perceived probabilities. This is why real-world probabilities are, for the most part, imprecise.

What is important to note is that in applications of probability theory in such fields as risk assessment, forecasting, planning, assessment of causality and fault diagnosis, it is common practice to ignore the imprecision of probabilities. The problem with this practice is that it leads to results whose validity is in doubt. This underscores the need for approaches in which imprecise probabilities are treated as imprecise probabilities rather than as precise probabilities.

Peter Walley's seminal work "Statistical Reasoning with Imprecise Probabilities," published in 1991, sparked a rapid growth of interest in imprecise probabilities. Today, we see a substantial literature, conferences, workshops and summer schools. An exposition of mainstream approaches to imprecise probabilities may be found in the 2002 special issue of the Journal of Statistical Planning and Inference (JSPI), edited by Jean-Marc Bernard. My paper "A perception-based theory of probabilistic reasoning with imprecise probabilities" is contained in this issue but is not a part of the mainstream. A mathematically rigorous treatment of elicitation of imprecise probabilities may be found in "A behavioural model for vague probability assessments," by Gert de Cooman, Fuzzy Sets and Systems, 2005.

The approach which is outlined in the following is rooted in my 1975 paper "The concept of a linguistic variable and its application to approximate reasoning," Information Sciences, but in spirit it is close to my 2002 JSPI paper. The approach is a radical departure from the mainstream. Its principal distinguishing features are: (a) imprecise probabilities are dealt with not in isolation, as in the mainstream approaches, but in an environment of imprecision of events, relations and constraints; (b) imprecise probabilities are assumed to be described in a natural language. This assumption is consistent with the fact that a natural language is basically a system for describing perceptions.

The capability to compute with information described in a natural language opens the door to consideration of problems which are not well-posed mathematically. Following are very simple examples of such problems; a crude numeric sketch of Example 2 appears immediately after the list.

 

  1. X is a real-valued random variable. What is known about X is: (a) usually X is much larger than approximately a; and (b) usually X is much smaller than approximately b, with a < b.  What is the expected value of X?
  2. X is a real-valued random variable. What is known is that Prob(X is small) is low; Prob(X is medium) is high; and Prob(X is large) is low.  What is the expected value of X?
  3. A box contains approximately twenty balls of various sizes. Most are small. There are many more small balls than large balls. What is the probability that a ball drawn at random is neither large nor small?
  4. I am checking in for my flight. I ask the ticket agent: What is the probability that my flight will be delayed? He tells me: Usually most flights leave on time. Rarely most flights are delayed. How should I use this information to assess the probability that my flight may be delayed?
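To give a feel for what such a computation involves, below is a deliberately crude numeric sketch of Example 2 in which the fuzzy granules "low", "high", "small", "medium" and "large" are replaced by crisp probability intervals and representative points. The CW approach described below retains the full fuzziness, so this is a coarsening, not the method itself, and every numeric choice is an illustrative assumption. Under these assumptions, bounds on the expected value of X follow from two small linear programs over the admissible probability vectors.

    from scipy.optimize import linprog

    # Representative points standing in for "small", "medium" and "large"
    # values of X (illustrative assumptions).
    xs = [1.0, 5.0, 12.0]

    # "low" and "high" coarsened to crisp probability intervals.
    bounds = [(0.0, 0.2), (0.6, 1.0), (0.0, 0.2)]

    # The probabilities must sum to 1.
    A_eq, b_eq = [[1.0, 1.0, 1.0]], [1.0]

    # E[X] = sum_i x_i p_i; minimize and maximize it over all admissible p.
    lo = linprog(c=xs, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    hi = linprog(c=[-x for x in xs], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(f"E[X] lies in [{lo.fun:.2f}, {-hi.fun:.2f}]")   # [4.20, 6.40]

In the full CW treatment the interval endpoints themselves become fuzzy, so the computed expected value is a fuzzy (granular) number rather than a crisp interval.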

 

To compute with information described in natural language we employ the formalism of Computing with Words (CW) (Zadeh 1999) or, more generally, NL-Computation (Zadeh 2006). The formalism of Computing with Words, in application to computation with information described in a natural language, involves two basic steps: (a) precisiation of meaning of propositions expressed in natural language; and (b) computation with precisiated propositions. Precisiation of meaning is achieved through the use of generalized-constraint-based semantics, or GCS for short. The concept of a generalized constraint is the centerpiece of GCS. Importantly, generalized constraints, in contrast to standard constraints, have elasticity. What this implies is that in GCS everything is or is allowed to be graduated, that is, be a matter of degree. Furthermore, in GCS everything is or is allowed to be granulated. Granulation involves partitioning of an object into granules, with a granule being a clump of elements drawn together by indistinguishability, equivalence, similarity, proximity or functionality.

A generalized constraint is an expression of the form X isr R, where X is the constrained variable, R is the constraining relation and r is an indexical variable which defines the modality of the constraint, that is, its semantics. The principal modalities are: possibilistic (r = blank), probabilistic (r = p), veristic (r = v), usuality (r = u) and group (r = g). The primary constraints are possibilistic, probabilistic and veristic. The standard constraints are bivalent possibilistic, probabilistic and bivalent veristic. In large measure, scientific theories are based on standard constraints.
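A minimal data-structure sketch of the expression X isr R may make the notation concrete. The class and field names below are illustrative choices, not part of the formalism, and the membership function for "small" is an arbitrary example.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Callable

    class Modality(Enum):
        POSSIBILISTIC = ""    # r = blank
        PROBABILISTIC = "p"
        VERISTIC = "v"
        USUALITY = "u"
        GROUP = "g"

    @dataclass
    class GeneralizedConstraint:
        variable: str                        # the constrained variable X
        relation: Callable[[float], float]   # R, e.g. a membership function
        modality: Modality                   # r: the semantics of the constraint

    # "X is small", with "small" an elastic (possibilistic) relation:
    small = lambda u: max(0.0, 1.0 - u / 5.0)   # illustrative membership function
    gc = GeneralizedConstraint("X", small, Modality.POSSIBILISTIC)

The elasticity of a generalized constraint shows up in the fact that the relation returns a degree in [0, 1] rather than a true/false verdict.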

Generalized constraints may be combined, projected, qualified, propagated and counterpropagated. The set of all generalized constraints, together with the rules which govern the generation of generalized constraints from other generalized constraints, constitutes the Generalized Constraint Language (GCL). Actually, GCL is more than a language—it is a language system. A language has descriptive capability. A language system has descriptive capability as well as deductive capability. GCL has both capabilities.

The concept of a generalized constraint plays a key role in GCS. Specifically, it serves two major functions: first, as a means of representing the meaning of a proposition, p, as a generalized constraint; and second, through this representation, as a means of dealing with p as an object of computation. It should be noted that representing the meaning of p as a generalized constraint is equivalent to precisiation of p through translation into GCL. In this sense, GCL plays the role of a meaning precisiation language. More importantly, GCL provides a basis for computation with information described in a natural language. This is the province of CW or, more generally, NL-Computation.

A concept which plays an important role in computation with information described in a natural language is that of a granular value. Specifically, let X be a variable taking values in a space U. A granular value of X, *u, is defined by a proposition, p, or more generally by a system of propositions drawn from a natural language. Assume that the meaning of p is precisiated by representing it as a generalized constraint, GC(p). GC(p) may be viewed as a definition of the granular value, *u. For example, granular values of probability may be defined as approximately 0.1, ..., approximately 0.9, approximately 1. A granular variable is a variable which takes granular values. For example, young, middle-aged and old are granular values of the granular variable Age. The probability distribution in Example 2 is an instance of a granular probability distribution. In effect, computation with imprecise probability distributions may be viewed as an instance of computation with granular probability distributions.
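As an illustration, the granular probability values just mentioned can be encoded as triangular fuzzy numbers. The triangular shape and the width parameter below are assumptions made for the sketch, not part of the definition of a granular value.

    # "approximately c" as a triangular membership function centered at c;
    # the width 0.05 is an arbitrary illustrative choice.
    def approximately(c, width=0.05):
        return lambda u: max(0.0, 1.0 - abs(u - c) / width)

    granular_probs = {f"approximately {0.1 * k:.1f}": approximately(0.1 * k)
                      for k in range(1, 11)}
    print(granular_probs["approximately 0.5"](0.52))   # membership ~ 0.6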

In the CW-based approach to computation with imprecise probabilities, computation with imprecise probabilities reduces to computation with generalized constraints. What is used for this purpose is the machinery of GCL. More specifically, computation is carried out through the use of rules which govern propagation and counterpropagation of generalized constraints. The principal rule is the extension principle (Zadeh 1965, 1975). In its general form, the extension principle is a computational schema which relates to the following problem. Assume that Y is a given function of X, Y = g(X). Let *g and *X be granular values of g and X, respectively. Compute *g(*X).
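A brute-force numeric sketch of the extension principle may be helpful: the membership of a value v in *g(*X) is sup{ μ*X(u) : g(u) = v }, approximated here on a grid. The choice of g, the membership function of *X, the grid and the tolerance are all illustrative assumptions.

    import numpy as np

    us = np.linspace(-3.0, 3.0, 601)                 # grid over the domain of X
    mu_X = np.maximum(0.0, 1.0 - np.abs(us - 1.0))   # *X = "approximately 1"
    g = lambda u: u ** 2                             # illustrative g

    def membership_in_result(v, tol=5e-2):
        # sup of mu_X over grid points u with g(u) close to v
        hits = np.abs(g(us) - v) < tol
        return float(mu_X[hits].max()) if hits.any() else 0.0

    print(membership_in_result(1.0))   # ~1.0: g(1) = 1 and mu_X peaks at u = 1
    print(membership_in_result(4.0))   # ~0.0: g(u) = 4 only near u = +/-2, where mu_X ~ 0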

In most computations involving imprecise probabilities, a special form of the extension principle, relating to possibilistic constraints, is sufficient. More specifically, assume that f is a given function and that f(X) is constrained by a possibility distribution, A. The problem is to compute the possibility distribution of g(X), where g is another given function, from the possibility distribution of f(X). In this case, the extension principle reduces the solution of the problem in question to the solution of a variational problem (Zadeh 2006).
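The possibilistic special case can be sketched in the same brute-force style, with grid search standing in for the variational problem rather than solving it exactly; f, g, A and the grid are illustrative assumptions.

    import numpy as np

    us = np.linspace(-3.0, 3.0, 601)
    f = lambda u: u + 1.0                                     # illustrative f
    g = lambda u: u ** 2                                      # illustrative g
    mu_A = lambda t: np.maximum(0.0, 1.0 - np.abs(t - 2.0))   # A = "approximately 2"

    def poss_g(v, tol=5e-2):
        # possibility of g(X) = v is sup{ mu_A(f(u)) : g(u) = v }
        hits = np.abs(g(us) - v) < tol
        return float(mu_A(f(us[hits])).max()) if hits.any() else 0.0

    print(poss_g(1.0))   # ~1.0: at u = 1, g(u) = 1 and mu_A(f(1)) = mu_A(2) = 1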

In summary, the CW-based approach to computation with imprecise probabilities opens the door to computation with probabilities, events, relations and constraints which are described in a natural language.  Progression from computation with precise probabilities, precise events, precise relations and precise constraints to computation with imprecise probabilities, imprecise events, imprecise relations and imprecise constraints is an important step forward—a step which has the potential for a significant enhancement of the role of natural languages in human-centric fields such as economics, decision analysis, operations research, law and medicine, among others.

 

 

Neuroeconomics: yet another field where rough sets can be useful?

 

Janusz Kacprzyk, Fellow of IEEE

 

Systems Research Institute, Polish Academy of Sciences

ul. Newelska 6, 01–447 Warsaw, Poland

E-mail: kacprzyk@ibspan.waw.pl

WWW: www.ibspan.waw.pl/kacprzyk

Google: kacprzyk

 

Abstract

 

We deal with neuroeconomics, which may be viewed as an emerging field of research at the crossroads of economics, or decision making, and brain research. Neuroeconomics is basically about the neural mechanisms involved in decision making and their economic relations and connotations. We first briefly review the traditional formal approach to decision making, then discuss some experiments on real-life decision making processes and point out when and where the results prescribed by the traditional formal models are not confirmed. We consider both decision-analytic and game-theoretic models. Then we discuss results of brain investigations that indicate which parts of the brain are activated while performing some decision-making-related courses of action, and that offer some explanation of the possible causes of discrepancies between the results of formal models and experiments. We point out the role of brain segmentation techniques in determining the activation of particular parts of the brain, and note that rough-set approaches to brain segmentation, notably those of Hassanien, Ślęzak and their collaborators, can provide a useful and effective tool.

 

 

 

Research Directions in the KES Centre

 

Lakhmi Jain

 

School of Electrical and Information Engineering,

Knowledge Based Intelligent Engineering Systems Centre,

University of South Australia, Mawson Lakes, SA 5095, Australia.

Lakhmi.Jain@unisa.edu.au

 

Abstract

 

The ongoing success of the Knowledge-Based Intelligent Information and Engineering Systems (KES) Centre has been stimulated by many years of collaboration with industry and academia. The Centre currently has adjunct personnel and advisors who mentor or collaborate with its students and staff, drawn from the Defence Science and Technology Organisation (DSTO), BAE Systems (BAE), Boeing Australia Limited (BAL), Raytheon, Tenix, the University of Brighton, the University of the West of Scotland, Loyola College in Maryland, the University of Milano, Oxford University, Old Dominion University and the University of Science Malaysia. Owing to these links and to intellectual property rights, much of our research remains unpublished. The list provided is not exhaustive; given the diverse range of research activities, only those relating to Intelligent Agent developments are presented.

 

 



[1] Dedicated to Peter Walley.

* Department of EECS, University of California, Berkeley, CA 94720-1776; Telephone: 510-642-4959; Fax: 510-642-1712;

E-Mail: zadeh@eecs.berkeley.edu. Research supported in part by ONR N00014-02-1-0294, BT Grant CT1080028046, an Omron Grant, a Tekes Grant, a Chevron Texaco Grant and the BISC Program of UC Berkeley.