Evaluating Predictions -II

(25th-May-2020)


Point Estimates with No Input Features

• The simplest case for learning is when there are no input features and there is a single target feature. This is the base case for many learning algorithms and corresponds to the case where all inputs are ignored. Here, a learning algorithm predicts a single value for the target feature for all of the examples. The prediction that minimizes the error depends on which error measure is being minimized; the definitions below make this concrete, and the sketch after them shows the optimal point estimate for each.

• Suppose E is a set of examples and Y is a numeric target feature. With no input features, the best an agent can do is make a single point estimate for all of the examples. Note that the agent could make stochastic predictions, but these are not better; see Exercise 7.2.

• The sum-of-squares error on E of prediction v is ∑_{e∈E} (val(e,Y) − v)².

• The absolute error on E of prediction v is ∑_{e∈E} |val(e,Y) − v|.

• The worst-case error on E of prediction v is max_{e∈E} |val(e,Y) − v|.
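
The following is a minimal sketch, not taken from the original text, that illustrates the three error measures on a small hypothetical list of target values. It uses the standard facts that the mean minimizes the sum-of-squares error, the median minimizes the absolute error, and the midrange minimizes the worst-case error.

```python
# Point estimates with no input features: a minimal sketch.
# 'values' stands in for val(e, Y) over the examples in E (hypothetical data).

values = [2.0, 3.0, 10.0]

def sum_of_squares_error(values, v):
    return sum((y - v) ** 2 for y in values)

def absolute_error(values, v):
    return sum(abs(y - v) for y in values)

def worst_case_error(values, v):
    return max(abs(y - v) for y in values)

# Standard optimal point estimates for each error measure:
mean = sum(values) / len(values)            # minimizes sum-of-squares error
median = sorted(values)[len(values) // 2]   # minimizes absolute error (odd number of examples)
midrange = (min(values) + max(values)) / 2  # minimizes worst-case error

print(mean, sum_of_squares_error(values, mean))      # 5.0  38.0
print(median, absolute_error(values, median))        # 3.0   8.0
print(midrange, worst_case_error(values, midrange))  # 6.0   4.0
```

For these values the three estimates differ (mean 5.0, median 3.0, midrange 6.0), which illustrates the point above: the best single prediction depends on which error measure the agent is trying to minimize.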
