Public Policy: Foreign Literature Translation Material


Policy analysis

Public policy began with the systematic analysis of data for governmental purposes. The word statistics derives from state, yet policy was not greatly informed by numbers, although there were some experiments in the use of statistics from the 1930s through to the 1960s. More occurred after 1960 with the implementation of large-scale government programmes by the Kennedy and Johnson administrations. The size and complexity of the 1960s social programmes led to a demand for better analysis. Mathematical techniques deriving from Rand or the United States Defense Department under Robert McNamara could conceivably be applied to the public sector. It was an age of science, an age in which any problem was seen as having a possible solution that could be discovered through the proper application of the scientific method. Related to this belief in solutions was the availability of large-scale computers and suitable software for processing statistical data to levels of great sophistication.

The early period of policy analysis is generally regarded as a failure because it was oversold, that is, because it assumed that numbers alone or techniques alone could solve public policy problems. It is only from 1980 that Putt and Springer see what they term a third stage, in which policy analysis is perceived as facilitating policy decisions, not displacing them (1989, p. 16). As they explain:

Third-stage analysts decreasingly serve as producers of solutions guiding decision makers to the one best way of resolving complex policy concerns. Policy research in the third stage is not expected to produce solutions, but to provide information and analysis at multiple points in a complex web of interconnected decisions which shape public policy. Policy research does not operate separated and aloof from decision makers; it permeates the policy process itself.

Instead of providing an answer by themselves, empirical methods were to be used to aid decision-making. While few of the early policy analysts saw themselves as decision-makers (though that was a charge levelled against them), this was the extent to which their analyses were used. Third-stage policy analysis is supposed to be a supplement to the political process, not a replacement for it. Analysis assists in the mounting of arguments and is used by the different sides in a particular debate; all participants in the policy process use statistics as ammunition to reinforce their arguments. The collection of data has greatly improved, and the ways of processing numbers are better than before. However, whether third-stage policy analysis is really so different from early policy analysis will be considered later.

Empirical methods

Much has been said in passing of the empirical methods and skills needed by policy analysis and policy analysts. In one view, two sets of skills are needed. First are scientific skills, which fall into three categories: information-structuring skills, which sharpen the analyst's ability to clarify policy-related ideas and to examine their correspondence to real-world events; information-collection skills, which provide the analyst with approaches and tools for making accurate observations of persons, objects, or events; and information-analysis skills, which guide the analyst in drawing conclusions from empirical evidence (Putt and Springer, 1989, p. 24). These scientific skills are not independent but interrelated; they are also related to what Putt and Springer call facilitative skills (1989, p. 25), such as policy, planning and managerial skills.

So, while empirical skills are needed, other less tangible ones are needed as well. Both sets of skills point to the emphasis on training found in policy analysis. If analysts inside the bureaucracy can be trained in scientific and facilitative skills, the making of policy and its outcomes should be improved.

Some of the empirical methods used in policy analysis include: (i) benefit-cost analysis (optimum choice among discrete alternatives without probabilities); (ii) decision theory (optimum choice with contingent probabilities); (iii) optimum-level analysis (finding an optimum policy where doing too much or too little is undesirable); (iv) allocation theory (optimum-mix analysis); and (v) time-optimization models (decision-making systems designed to minimize time consumption) (Nagel, 1990). In their section on options analysis, which they regard as the heart of policy models, Hogwood and Gunn point to various operations research and decision analysis techniques, including linear programming, dynamic programming, pay-off matrices, decision trees, risk analysis, queuing theory and inventory models. How to carry these out can be found in a good policy analysis book. They are mentioned here for two reasons: first, to point out that there is a variety of techniques and, second, that they share an empirical approach to policy.
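To give a concrete flavour of these techniques, the sketch below works through technique (ii), decision theory with contingent probabilities, using a small pay-off matrix. The policy options, states of the world and the figures are invented for illustration only; they are not drawn from Nagel or from Hogwood and Gunn.

```python
# A minimal sketch of decision theory with contingent probabilities
# (technique (ii) above). All options, states and figures are hypothetical.

# Pay-off matrix: estimated net societal benefit of each policy option
# under two possible states of the world.
payoffs = {
    "expand_programme": {"economy_grows": 120.0, "economy_shrinks": -40.0},
    "pilot_programme":  {"economy_grows": 50.0,  "economy_shrinks": 10.0},
    "do_nothing":       {"economy_grows": 0.0,   "economy_shrinks": 0.0},
}

# Contingent probabilities assigned to each state of the world.
probabilities = {"economy_grows": 0.6, "economy_shrinks": 0.4}

def expected_value(option):
    """Probability-weighted net benefit of a single policy option."""
    return sum(probabilities[state] * payoff
               for state, payoff in payoffs[option].items())

# Rank the options by expected net benefit; the analysis informs the
# decision maker rather than replacing the political choice.
for option in sorted(payoffs, key=expected_value, reverse=True):
    print(f"{option}: expected net benefit = {expected_value(option):.1f}")
```

In the third-stage spirit described above, the output of such a calculation is one input among many: it ranks options under stated assumptions, and changing the probabilities or the pay-offs is itself part of the argument between the different sides in a debate.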

As probably the key person involved in developing mathematical approaches to policy issues, Nagel is naturally enthusiastic about their benefits, arguing that policy evaluation based on management science methods seems capable of improving decision-making processes (Nagel, 1990, p. 433):

Decisions are then more likely to be arrived at that will maximize or at least increase societal benefits minus costs. Those decision-making methods may be even more important than worker motivation or technological innovation in productivity improvement. Hard work means little if the wrong products are being produced in terms of societal benefits and costs. Similarly, the right policies are needed to maximize technological innovation, which is not likely to occur without an appropriate public policy environment.

One can admire the idea that societal improvement can result from empirical decision-making methods. There are undoubtedly some areas in which the
