DR.GEEK

An AI police officer watching a hitman aim a gun at a victim

(6th-January-2021)



• If you try to dig into the details of these tradeoffs, you will end up having to understand the AI's social model. But you will not be able to understand that model well enough to give the AI detailed instructions. A simple high-level instruction like "maximize profits" will cause harm to people, and yet it seems hopelessly complex to formulate instructions that balance maximizing profits against minimizing harm.

• One approach to resolving the ambiguities of Asimov's Laws is to instruct AI systems by defining numerical values for each possible outcome. Section 2.1 will explain how such numerical values fit into a mathematical framework for future AI systems. Profit is a value associated with outcomes, as in the Omniscience example, but there are other ways to define values for outcomes that account more generally for human welfare. To make the approach clear, consider the scenario in Figure 2.1 and assume there are three outcomes for both the hitman and the victim: each may be shot dead, shot but only wounded, or not shot. Combining these, there are three times three equals nine possible outcomes. Table 2.1 shows one way to assign values to these outcomes, and the AI robot can use those values to calculate what it should do, as the sketch after the tables below illustrates.


[Table 2.1: Three-by-three table of values for hitman and victim outcomes.]

[Table: values of the AI's actions for the hitman and victim.]
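Because the tables above appear only as images in the original post, the numbers below are illustrative placeholders rather than the actual values from Table 2.1. This minimal Python sketch shows only the mechanics of the calculation: assign a value to each of the nine combined outcomes, estimate the probability of each outcome under each candidate action, and choose the action with the highest expected value.

```python
# A minimal sketch of the expected-value calculation described above.
# All outcome values and action probabilities are illustrative
# placeholders, not the numbers from Table 2.1.

# Outcomes for each person: shot dead, shot but wounded, not shot.
OUTCOMES = ("dead", "wounded", "unharmed")

# Placeholder value for each of the 3 x 3 = 9 combined outcomes,
# indexed as (hitman_outcome, victim_outcome). Here harm to the
# victim is weighted as worse than harm to the hitman.
values = {
    ("dead", "dead"): -100,
    ("dead", "wounded"): -60,
    ("dead", "unharmed"): -20,
    ("wounded", "dead"): -90,
    ("wounded", "wounded"): -50,
    ("wounded", "unharmed"): -10,
    ("unharmed", "dead"): -80,
    ("unharmed", "wounded"): -40,
    ("unharmed", "unharmed"): 0,
}

# Placeholder probability of each combined outcome under each action
# the AI police officer might take.
actions = {
    "do nothing": {
        ("unharmed", "dead"): 0.9,
        ("unharmed", "wounded"): 0.1,
    },
    "shoot to wound hitman": {
        ("wounded", "unharmed"): 0.7,
        ("wounded", "dead"): 0.1,
        ("dead", "unharmed"): 0.1,
        ("unharmed", "dead"): 0.1,
    },
    "shout a warning": {
        ("unharmed", "unharmed"): 0.5,
        ("unharmed", "dead"): 0.4,
        ("unharmed", "wounded"): 0.1,
    },
}

def expected_value(probs):
    """Sum of each outcome's value weighted by its probability."""
    return sum(p * values[outcome] for outcome, p in probs.items())

for action, probs in actions.items():
    print(f"{action}: expected value {expected_value(probs):+.1f}")

# The AI chooses the action with the highest expected value.
best = max(actions, key=lambda a: expected_value(actions[a]))
print("chosen action:", best)
```

With these placeholder numbers the AI shoots to wound the hitman, since that action has the least negative expected value. Changing the values or probabilities can flip the decision, which is precisely the ambiguity the bullets above describe: the tradeoffs are encoded entirely in the numbers someone must choose.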


