Artificial Intelligence


Non Bayesian Conditioning and Deconditioning

Authors: Jean Dezert, Florentin Smarandache

In this paper, we present a Non-Bayesian conditioning rule for belief revision. This rule is truly Non-Bayesian in the sense that it does not satisfy the commonly adopted principle that when a prior belief is Bayesian, then after conditioning by X, Bel(X|X) must equal one. Our new conditioning rule for belief revision is based on the proportional conflict redistribution rule of combination developed in DSmT (Dezert-Smarandache Theory), and it abandons Bayes' conditioning principle. Such Non-Bayesian conditioning makes it possible to judiciously take into account the level of conflict between the available prior belief and the conditional evidence. We also introduce the deconditioning problem and show that it admits a unique solution in the case of a Bayesian prior; this solution cannot be obtained when the classical Shafer and Bayes conditioning rules are used. Several simple examples are also presented to compare the results of this new Non-Bayesian conditioning with the classical ones.
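Since the abstract describes conditioning as a PCR-based combination of the prior belief with the conditioning evidence, the Python sketch below illustrates the idea on a two-element frame: the prior bba is combined with the "certain" source m_X(X) = 1 using a PCR5-style redistribution of partial conflicts. The function names, the toy masses, and the choice of PCR5 (rather than another PCR variant from DSmT) are assumptions made for illustration, not the authors' implementation.

```python
from itertools import product

def pcr5_combine(m1, m2):
    """Combine two bbas (dicts: frozenset -> mass) with a PCR5-style rule:
    conjunctive consensus, plus proportional redistribution of each partial
    conflict x*y back to the two focal sets that produced it."""
    result = {}
    for (A, x), (B, y) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:                           # consensus part
            result[inter] = result.get(inter, 0.0) + x * y
        elif x + y > 0:                     # partial conflict, redistribute
            result[A] = result.get(A, 0.0) + x * x * y / (x + y)
            result[B] = result.get(B, 0.0) + y * y * x / (x + y)
    return result

def condition(prior, X):
    """Non-Bayesian conditioning of `prior` by X: combine with m_X(X) = 1."""
    return pcr5_combine(prior, {X: 1.0})

def bel(m, A):
    """Belief of A: sum of masses of all focal sets included in A."""
    return sum(v for S, v in m.items() if S <= A)

# Bayesian prior m({a}) = 0.3, m({b}) = 0.7; condition on X = {a}.
prior = {frozenset({"a"}): 0.3, frozenset({"b"}): 0.7}
post = condition(prior, frozenset({"a"}))
print(post)                            # ~ {a}: 0.712, {b}: 0.288
print(bel(post, frozenset({"a"})))     # ~ 0.712, i.e. Bel(X|X) < 1
```

With this toy Bayesian prior the sketch yields Bel({a}|{a}) ≈ 0.71 rather than 1, which is the kind of departure from Bayes' conditioning principle that the abstract describes: the mass of the conflicting hypothesis {b} is not simply renormalized away but partly retained through the conflict redistribution.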

Comments: 6 pages

Download: PDF

Submission history

[v1] 20 May 2010

