I need help creating a thesis and an outline on The Necessity of Building a Machine That Is Capable of Distinguishing Right and Wrong. Prepare this assignment according to the guidelines found in the APA Style Guide. An abstract is required.

Artificial moral agents (AMAs) are not only necessary but, to some extent, inevitable. The recent spread of robots into almost every sector of the nation clearly demonstrates the market demand for such innovations. The freedom granted for their development says no less; as Rosalind Picard, Vyzas, and Healey (1180) so aptly put it, "The greater the freedom of a machine, the more it will need moral standards." Increasingly sophisticated and autonomous systems must therefore be fitted with capabilities for moral decision making. Although Colin Allen and Wendell Wallach acknowledge that, at present, there is little prospect of an artificial moral agent comparable to a human being, such systems remain a future possibility, because complex and pervasive systems such as automated aircraft and automated weaponry have already demonstrated the transfer of human decision-making capabilities to machines.
These systems, such as automated aircraft, the authors explain, are designed to operate within defined tolerances. For example, autopilots programmed to respect a maximum turning radius can be regarded as having a rudimentary moral functionality, because they implement an ethical concern for passengers' safety and comfort. To meet this standard, most autopilots are designed to keep turns within a specific limit so that passengers are not frightened or made uncomfortable. To that extent, such an autopilot can be regarded as an AMA programmed to observe certain moral considerations.
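To make the autopilot example concrete, the following is a minimal sketch, assuming a hypothetical comfort ceiling of 25 degrees of bank and an invented function name command_bank_angle; none of these values or names come from an actual flight-control system, and the clamp simply stands in for the passenger-comfort consideration described above.

```python
# Minimal sketch: a hard-coded passenger-comfort constraint in an autopilot's
# turn logic. The 25-degree limit and all names are illustrative assumptions.

MAX_COMFORT_BANK_DEG = 25.0  # assumed comfort ceiling for bank angle


def command_bank_angle(requested_bank_deg: float) -> float:
    """Clamp a requested bank angle to the comfort limit.

    The clamp acts as a stand-in for the 'moral' consideration in the essay:
    passenger safety and comfort override the raw navigation request.
    """
    limit = MAX_COMFORT_BANK_DEG
    return max(-limit, min(limit, requested_bank_deg))


if __name__ == "__main__":
    for requested in (10.0, 30.0, -40.0):
        print(requested, "->", command_bank_angle(requested))
```

The point of the sketch is only that a fixed design constraint can encode an ethical concern without the system "reasoning" about ethics at all.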
What measures can ensure the development of a fully functional artificial moral agent? First, there is a need to bridge philosophers' abstract theorizing and engineers' commitment to buildable designs, because theories can inform designs just as designs can shape theories. The implementation of these moral capacities in robots, as Wendell Wallach indicates, falls within two broad approaches: the top-down imposition of an ethical theory, and the bottom-up development of systems that improve their own performance without reference to theoretically specified standards. The difference is that the top-down approach depends on already formulated rules, such as the Golden Rule or the deontology of Kant's categorical imperative; these rules guide development and later serve to evaluate the AMA's performance. In contrast, the bottom-up approach employs an evolutionary and developmental process in building AMAs (Colin Allen and Wendell Wallach, 58). Followed correctly, these approaches can support the development of an efficient AMA: the top-down approach allows a broad definition of ethical values, enabling the AMA to make a variety of ethical decisions across countless situations, while the bottom-up approach can dynamically integrate inputs from distinct systems, making the AMA more reliable and cost-effective.
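To illustrate the contrast between the two approaches, here is a minimal sketch in Python, assuming invented types (Action, BottomUpAgent) and a toy approval-scoring scheme; it is not an implementation from Allen and Wallach, only a schematic of the distinction they draw between rules coded in advance and dispositions adjusted through feedback.

```python
# Schematic contrast of the two broad approaches to building an AMA.
# All class names, fields, and the scoring scheme are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class Action:
    description: str
    harms_person: bool
    treats_person_as_means_only: bool


def top_down_permissible(action: Action) -> bool:
    """Top-down: evaluate an action against explicitly coded rules,
    here a crude rendering of deontological constraints."""
    if action.harms_person:
        return False
    if action.treats_person_as_means_only:
        return False
    return True


@dataclass
class BottomUpAgent:
    """Bottom-up: no rules are coded in advance; the agent adjusts a score
    for each kind of action based on approval or disapproval feedback."""
    scores: dict = field(default_factory=dict)

    def feedback(self, description: str, approved: bool) -> None:
        delta = 1.0 if approved else -1.0
        self.scores[description] = self.scores.get(description, 0.0) + delta

    def permissible(self, description: str) -> bool:
        return self.scores.get(description, 0.0) >= 0.0


if __name__ == "__main__":
    a = Action("share a patient's data without consent",
               harms_person=True, treats_person_as_means_only=True)
    print("top-down:", top_down_permissible(a))

    agent = BottomUpAgent()
    agent.feedback(a.description, approved=False)
    print("bottom-up:", agent.permissible(a.description))
```

The top-down function rejects the action because it violates rules fixed at design time, whereas the bottom-up agent only comes to reject it after receiving negative feedback, which mirrors the essay's point that the two approaches differ in where the moral standard originates.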