
Conditional Probability

(26th-August-2020)


• Typically, we want to know not only the prior probability of some proposition, but also how that belief is updated when an agent observes new evidence.

• The measure of belief in proposition h based on proposition e is called the conditional probability of h given e, written P(h|e).
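
In symbols, the standard definition is P(h|e) = P(h ∧ e) / P(e), which is defined whenever P(e) > 0.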

• A formula e representing the conjunction of all of the agent's observations of the world is called evidence. Given evidence e, the conditional probability P(h|e) is the agent's posterior probability of h. The probability P(h) is the prior probability of h; it is the same as P(h|true), because it is the probability before the agent has observed anything.

• The posterior probability involves conditioning on everything the agent knows about a particular situation. All evidence must be conditioned on to obtain the correct posterior probability.
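
To make this concrete, here is a minimal Python sketch with an invented joint distribution (the numbers are illustrative, not from the post). It computes a prior, checks that P(h) equals P(h|true), and shows why all of the evidence must be conditioned on: the posterior given one symptom differs from the posterior given both.

# Minimal sketch with an invented joint distribution over three propositions:
# worlds are truth assignments to (disease, symptom_a, symptom_b).
joint = {
    (True,  True,  True):  0.06,
    (True,  True,  False): 0.02,
    (True,  False, True):  0.01,
    (True,  False, False): 0.01,
    (False, True,  True):  0.04,
    (False, True,  False): 0.16,
    (False, False, True):  0.09,
    (False, False, False): 0.61,
}

def prob(event):
    # P(event): sum the probabilities of the worlds in which the event holds.
    return sum(p for world, p in joint.items() if event(world))

def cond(h, e):
    # P(h | e) = P(h and e) / P(e), defined when P(e) > 0.
    return prob(lambda w: h(w) and e(w)) / prob(e)

disease   = lambda w: w[0]
symptom_a = lambda w: w[1]
symptom_b = lambda w: w[2]

print(prob(disease))                  # prior P(disease) = 0.10
print(cond(disease, lambda w: True))  # P(disease | true) is the same 0.10
print(cond(disease, symptom_a))       # given one symptom: about 0.29
print(cond(disease, lambda w: symptom_a(w) and symptom_b(w)))  # given both: 0.60

Note how the belief moves from 0.10 to about 0.29 and then to 0.60 as more evidence is conditioned on; stopping at the partial evidence would give the wrong posterior.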

• For example, for the diagnostic assistant, the patient's symptoms are the evidence. The prior probability distribution over possible diseases is used before the diagnostic agent finds out about the particular patient. The posterior probability is the probability the agent uses after it has gained some evidence. When the agent acquires new evidence through discussions with the patient, observing symptoms, or the results of lab tests, it must update its posterior probability to reflect the new evidence. The new evidence is the conjunction of the old evidence and the new observations.
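
One way this updating can be implemented is sketched below, under the simplifying assumption (ours, not the post's) that symptoms are conditionally independent given the disease. Each new observation is folded into the current belief as it arrives, so the result is the posterior given the conjunction of all observations so far. All names and numbers are hypothetical.

# Hypothetical prior over diseases and likelihoods P(symptom | disease).
prior = {"flu": 0.05, "cold": 0.20, "healthy": 0.75}
likelihood = {
    "fever": {"flu": 0.90, "cold": 0.10, "healthy": 0.01},
    "cough": {"flu": 0.70, "cold": 0.60, "healthy": 0.05},
}

def update(belief, symptom):
    # Condition the current belief on one more observed symptom,
    # then renormalize so the probabilities sum to 1.
    unnorm = {d: belief[d] * likelihood[symptom][d] for d in belief}
    z = sum(unnorm.values())
    return {d: p / z for d, p in unnorm.items()}

belief = dict(prior)
for observed in ["fever", "cough"]:   # evidence accumulates as a conjunction
    belief = update(belief, observed)
    print(observed, belief)

After observing fever the belief in flu rises from 0.05 to about 0.62, and after also observing cough to about 0.72; the conditional independence assumption is what lets the update be done one observation at a time.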

• In another case, the information that the delivery robot receives from its sensors is its evidence. When sensors are noisy, the evidence is what is known, such as the particular pattern received by the sensor, not that there is a person in front of the robot. The robot could be mistaken about what is in the world, but it knows what information it received.
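
A minimal sketch of that distinction, with an invented noise model: the robot conditions on the sensor reading it actually received, not on the unobservable fact of whether a person is present.

p_person = 0.10                # prior P(person in front of robot)
p_pos_given_person = 0.95      # sensor fires when a person is present
p_pos_given_none = 0.20        # false-positive rate: the sensor is noisy

# P(person | positive reading) by Bayes' rule.
p_pos = p_pos_given_person * p_person + p_pos_given_none * (1 - p_person)
posterior = p_pos_given_person * p_person / p_pos
print(posterior)   # about 0.35: the reading raises belief but does not settle it

Because the sensor sometimes fires with no one there, a positive reading lifts the belief from 0.10 to only about 0.35; the robot remains uncertain about the world even though it is certain about what its sensor reported.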
