Within the framework of discrete probabilistic uncertain reasoning a large literature exists justifying the maximum entropy inference process, $\ME$, as optimal in the context of a single agent whose subjective probabilistic knowledge base is consistent. In particular, Paris and Vencovská completely characterised the $\ME$ inference process by means of an attractive set of axioms which an inference process should satisfy. More recently, the second author extended the Paris-Vencovská axiomatic approach to inference processes in the context of several agents whose subjective probabilistic knowledge bases, while individually consistent, may be collectively inconsistent. In particular, he defined a natural multi-agent extension of the inference process $\ME$ called the social entropy process, $\SEP$. However, while $\SEP$ has been shown to possess many attractive properties, those which are known are almost certainly insufficient to characterise it uniquely. It is therefore of particular interest to study those Paris-Vencovská principles valid for $\ME$ whose immediate generalisations to the multi-agent case are not satisfied by $\SEP$. One such principle is the Irrelevant Information Principle, a powerful and appealing principle which very few inference processes satisfy even in the single-agent context. In this paper we investigate whether $\SEP$ can satisfy an interesting modified generalisation of this principle.
maximum entropy, uncertain reasoning, discrete probability function, social inference process, Kullback-Leibler divergence, irrelevant information principle
03B42, 03B48, 68T37