RESPONSE TO APS REVIEW ‘PRIORITIES FOR CHANGE’
Michael Keating, AC
I welcome this opportunity to respond to some of the ideas and suggestions for change that the APS Review has put forward in its Interim Report. While that Report necessarily has a broad REDACTED, my response makes no pretence to be comprehensive. Instead I wish to focus on a couple of areas – mainly to do with policy development and coordination – about which I have some particular knowledge and experience that might prove useful to the Review. Furthermore, I have seen the response provided to the Review by Andrew Podger and Helen Williams, and except for their discussion of the respective responsibilities of the Heads of PM&C and the Public Service Commission, I am generally in agreement with their views.
The Policy Capability of the APS – How good is it? How well is it used?
My starting point is the Review’s finding, with which I agree, that:
“There are strong concerns that the APS’s underlying capacity has been weakened over time.” (Review: p.14)
“the APS’s capability has diminished over time, that there is too much unused potential, that specific skills gaps have emerged (for example, data capabilities), and that the APS’s bench strength is not what it once was.” (Review: p.38)
I am especially concerned that the Review (p. 14) has found that:
“too often, successful leadership within the APS is more associated with responsiveness and upward management than with employee development, entrepreneurialism and stewardship of the service.”
In addition, I was struck by the following two comments by APS staff members that the Review (p.36) has chosen to cite:
“We have many ambitious, capable people not reaching their full potential (they are frustrated, bored).”
“I have observed a gradual erosion of specialist expertise. This affects the ability of the APS to provide quality policy advice, regulatory oversight and services.”
And I note that the Review (p.38) has found that:
“Lack of career development and progression being the primary stated reason people leave the APS”.
My conclusion – which I suggest these findings support – is that the APS has become too concerned with managing upwards and too risk averse. That has led to too many senior managers in the SES, with middle-level professional staff having insufficient responsibility for providing policy advice and feeling that their skills are unused and unlikely to develop further.
In addition, the fetish of the present Coalition Government with keeping staff numbers down has led to work being contracted out that would once have been done by in-house professionals. It is yet to be demonstrated that this type of contracting out has led to budget savings, but it has left the remaining professionals in the APS feeling that they have become contract managers and that they don’t have sufficient opportunity to engage directly in the research and evaluation that should underpin policy development and review.
As the Review (p.15) notes, a key issue for the future of policy advice and its implementation is the balance that the APS should strike between remaining responsive to the government of the day and acting as custodian of the range of functions and institutions that endure from government to government. Indeed, I would go further and argue that the APS has a responsibility to argue on behalf of the public interest and good policy, while recognising that the elected government of the day should always be the final arbiter of what is in the public interest. But that still means that the public service has a responsibility to ensure that the government is fully and properly informed before it makes its decisions.
As I see it, the capacity of the APS to provide good policy advice will continue to atrophy unless better use is made of that advice. As the Review (p.36) puts it, a transformed workforce able to undertake deep research, evaluation and data analytics will be critical to integrated policy approaches that take a strategic view of Australia’s interests across economic, social, security and international domains. But first, and most importantly, it will be necessary to ensure there is a demand for such capability if it is to grow and prosper.
In that context, I question how much will be achieved by the Review’s proposals to ‘invest in capability and talent development’ as set out on pages 38 to 40. Of course, these proposals are worthy, but will they make much difference unless there is a genuine willingness to make full use of the professional capabilities of APS staff by ensuring that in future policy is always evidence-based? Furthermore, in my view, that would require making proper evaluation mandatory, as I discuss in the next section of this submission.
Program and policy evaluation: the key to better policy and better use of skills
I want to begin by acknowledging the discussion on p.41 of the Review’s Interim Report where the need to develop the research, evaluation and data analytics capacity in the APS is properly recognised. What is now needed is to determine how best this need can be met.
In this regard, I would strongly recommend the excellent report commissioned from ANZOG by the Review on Evaluation and Learning from Failure and Success. What I will attempt to do here is to provide my perspective on some of the issues and questions raised in the ANZOG report.
Fundamentally, the development, implementation and management of public services and programs should be guided by systematic efforts to determine ‘what works’ and ‘what is required to make it work’. This is why evaluation is so critical. Indeed, it is the capacity of the APS to engage in this analysis that provides the main rationale for having a separate professional and independent public service.
Unfortunately, however, as the Review acknowledges (p.41): “applied research functions across the APS have diminished over time”. Similarly, ANZOG (p. 8) found that:
“it has been observed repeatedly that the APS currently does not learn well from experience, that its approach to evaluation is piecemeal both in scope and quality, and that this diminishes accountability and is a significant barrier to evidence-based policy-making”, and
“the heart of this problem is not a lack of skills and capacity of public servants. Rather it is a product of cultural practices that have evolved within the APS, and of the environment in which the APS operates”.
One of the key impediments to effective evaluation within current structures and approaches, identified by ANZOG (p.9), is “a reluctance of ministers to allow for scrutiny of policies and programs due to such critiques”. Clearly this is a difficulty, but in the past – for example during the Hawke-Keating Governments – Australia did mandate systematic program evaluation and that did achieve significant benefits for public policy.
As ANZOG (p.9) points out, this evaluation:
“provides a strong basis for decision-making on whether to maintain or modify programs, and, by being embedded in the learning framework of departments, should enhance policy advice, and program development and implementation. It would also enable stronger engagement with stakeholders, the wider community and the electorate. It would enable them to more concretely demonstrate the public value, and value for money, of government initiatives, and to explain why some activities have been terminated, or particular strategies not advanced.”
Accordingly, ANZOG concludes that what is now needed is “an institutional framework that firmly embeds the strategic importance, and processes, of institutional learning”. Unless government decision-making mandates the use of evaluation and research capabilities, these will inevitably erode.
Based on the suggestions by Podger & Williams in their submission (p.11), I therefore propose that the following more systematic requirements for evaluations and program monitoring should be reintroduced:
- All new policy proposals in Cabinet submissions to include evaluation evidence that supports the proposal;
- Identification of the processes by which the measure is to be evaluated if agreed upon;
- All portfolios to have evaluation plans agreed with Finance that cover all portfolio programs and policies, with an expectation that programs will be subject to evaluation on a rolling basis every three years;
- To ensure the quality and independence of each evaluation, Finance should have the right to be represented on the panel overseeing any evaluation that it nominates; and
- Program performance data should be agreed with Finance and publicly reported annually, including for those programs whose delivery is outsourced.
Hopefully these requirements will change the culture and behaviour around government decision-making, leading to better policy as well as further developing and making better use of the capabilities of the APS. I believe these requirements for evaluation achieved that end in the past, and there is no reason why they cannot prove just as successful in the future.
The Role of the Department of Finance
A key issue raised in the ANZOG Report (p. 11) is the balance between centralisation and decentralisation of the evaluation function, and who should be responsible for coordinating evaluation in the future.
In my view, if agencies’ policy advising skills are to be properly developed and used then the main responsibility for evaluating their programs and policies must lie with the individual agencies themselves. Nevertheless, to ensure the adoption of best practice and to guard against backsliding, the requirements proposed above assume a central role for the Department of Finance. Giving this role to Finance would ensure the integration of the development of evaluation strategies into the policy/program development process, including funding approval, and is consistent with the responsibility that Finance has always had as the government’s principal adviser on the expenditure side of the budget.
I therefore do not see the need to create any new body, such as the so-called “Evaluator General”. Any independent statutory agency, possibly reporting to Parliament, would most likely report ex post, would not be involved in the development of policies, and therefore would not be capable of working closely with the rest of the APS. I am also sceptical that appointing a head of the evaluation profession would achieve anything useful, given the proposed responsibilities of the Department of Finance, and especially of its Head.
One legitimate concern about these proposals to lift the quality and use of evaluation within the APS is whether Finance itself has the capability to perform the role proposed for it here. On the other hand, there is nothing in this role for Finance that it has not had in the past, and I would contend that it was widely recognised to have performed that role well then. If, however, the Review has any doubts about whether Finance is presently equipped to carry out its past responsibilities, then I suggest that the Review consider what more is needed to restore Finance so that it can adequately fulfil its former role: for example, a demonstrated capacity to drive an evaluation agenda might be made a key requirement for appointment as the Head of Finance.
A greater capacity to invest in long-term projects
One other issue that I would like to raise under the rubric of ‘evaluation’ is the call by the Review for ‘a greater capacity to invest in long-term projects’. Frankly I am surprised by this recommendation as I am not aware of any evidence that more such investment is needed, and certainly the Review provides no such evidence.
Instead, the evidence shows that Australia massively over-invests in infrastructure. For example, the Grattan Institute has demonstrated that almost none of the projects announced over at least the last two elections was subjected to cost-benefit analysis, and the obvious reason is that the proponents know the projects would never stand up to serious evaluation. In addition, it is impossible to say that there is a shortage of infrastructure while the use of that infrastructure is not properly charged for – indeed, simple common sense would tell the Review that if something is provided free, there will very likely be excess demand for it.
Accordingly, I strongly suggest that the Review is doing a disservice by recommending that Australia needs to invest in more infrastructure. Instead, if the Review wants to say something useful about infrastructure investment then it should recommend proper evaluation and charging for all infrastructure projects so that they can be assessed on their merits and are not agreed politically, purely to satisfy this or that interest group.
Balance between the centre and individual agencies
The Review starts with a good discussion of the future challenges likely to face Australian governments and the APS serving them. One conclusion from this discussion is the need for more ‘joined-up government’, or as the Review (p. 14) puts it:
“There is widespread agreement that the constituent parts of the APS will need to work together on these challenges and to realise the opportunities provided by breakthrough technologies and better engagement.”
I don’t disagree with this conclusion, nor with many of the ideas that the Review has floated to improve APS collaboration – although the devil will be in the detail. However, the issue I wish to raise is what changes if any might be made to the departmental structures which have always governed the operation of the APS.
More specifically, the issue that concerns me is how much should responsibility for reform of the APS and its ability to respond to future policy challenges be centralised. The Review (p.14) seems to have formed the conclusion that:
“devolution has empowered agencies – but has made it more difficult for the APS to tackle the interconnected challenges Australia will face in coming decades.”
The Review (p.34) also alleges that:
“Some aspects of the Budget process can have unintended consequences for APS collaboration, experimentation and long-term thinking. This risks undermining both the quality of advice to government and the implementation of government decisions. A cross-portfolio approach to allocating resources and prioritising investments presents an opportunity to deliver better outcomes for all.”
I do not know on what basis the Review reached these conclusions, nor what the Review is proposing instead. However, I would strongly dispute the suggestion that devolution weakened the capacity of the APS to respond to cross-portfolio challenges. Neither is it clear to me in what way the budgetary process inhibits the allocation and expenditure of funds consistent with the government’s priorities.
As someone who was Secretary of a department both before and after these New Public Management (NPM) changes were introduced, I would argue precisely the opposite. There used to be a “silo” mentality across the APS, but it largely disappeared as the culture of the APS changed during the 1980s and 1990s. The main reason for this shift towards much better collaboration was not, however, the NPM reforms per se. Rather, the key change was that staff no longer expected to remain in one department for their whole working careers; instead, staff were encouraged to move around. Most importantly in this regard, Secretaries of Departments had previously spent all or almost all of their careers in the one department and did not expect to work anywhere else, which made them very defensive of their territory and unable to think laterally. From the 1980s, however, Secretaries were generally appointed from outside the department, and they had a strong expectation that they would be moved after five years – most likely to a related department – which made them very willing to collaborate with those related departments.
I would also like to remind the Review that the 1980s and 1990s are widely regarded as a stellar period for good policy reform in Australia. Much of this reform agenda and certainly the details of these reforms were largely developed by the then APS, which was perfectly capable of handling cross-portfolio issues.
The reality is that each of the portfolios in the Australian government corresponds to a major government function, and these functions have remained largely unchanged across nations for around a century or more. I question whether most people expect their health services to be coordinated with other government services such as transport or security. On the other hand, many of the most important issues for health policy – such as the cost of health care, better coordination of services to chronically ill patients, or better preventive health – are best handled within that portfolio.
Realistically, the management of these different health issues does require devolution within the Health portfolio without much intervention by central agencies. As Podger and Williams say in their response to the Review, any assessment of capability needs to recognise the importance of subject matter expertise. Similarly, the span of management control for the delivery of services requires some specialisation and should not be too broad. Some delegation to and within departments will therefore be necessary. Indeed, as I have argued above in my discussion of evaluation, what the APS most lacks is expertise, primarily because the expertise it has is not used well.
Of course, there are from time to time issues that do require cross-portfolio collaboration, but I would contend that this can happen within existing structures. For example, I understand that the development of the NDIS cut across the responsibilities of a few portfolios, but by all accounts this was managed satisfactorily. Similarly, there are long-standing arrangements for the coordination of national security matters. In addition, in my experience, it is always possible to second experts from relevant agencies to a specially established taskforce when the response to a policy problem requires a multi-agency approach.
In sum, I agree with Podger and Williams that:
“There is little benefit in trying to connect everything to everything else all of the time, and a real risk of blurred accountability. There is a range of structures and processes appropriate for different ‘connected’ problems, and there remains considerable benefit in having stable organisational structures for ongoing functions that can be drawn upon as needed.”
Machinery of Government Changes
The Review (p. 31) finds that:
“Machinery of Government (MoG) changes are a principal means for governments to align APS functions around their priorities. Over the past 20 years, the APS has undergone more than 200 MoG changes.”
The inference the Review then seems to draw is that the frequency of these MoG changes is evidence that present structures are insufficiently flexible to meet changing government priorities. The Review (p.31) also seems to think that there is some
“Truly dynamic operating model [that] will reduce the need for MoG changes – and when they are needed they will be cheaper, quicker, and more efficient.”
Frankly, I remain sceptical. As far as I can tell, the Review has yet to reveal what this ‘truly dynamic operating model’ is, but I doubt that it will change much. In my experience, the reason for MoG changes is almost always political. Sometimes they are intended to give greater prominence to a particular new initiative or to satisfy some minister’s aspirations, but I think a proper examination of the evidence from past changes would show that the MoG changes themselves rarely if ever make any difference to the amount spent or to what is achieved.
I was one of the small group of people who organised the MoG changes in 1987 which significantly reduced the number of departments and created what were then called mega-departments. We believed we had put in place a structure that would prove to be sufficiently flexible and adaptable that no further changes would ever be needed. While we clearly failed to prevent further change, I would still maintain that the present structure of departments is sufficiently comprehensive that it can readily accommodate shifting priorities while also ensuring the necessary coordination of closely-related functions.
In short, I don’t think creating new institutions will make the APS more effective. It is not so much the institutions and governance arrangements that need to change; rather it is the culture.
If the APS is to return to being a trusted and valued source of policy advice, it needs to restore its policy capabilities. In my opinion, that will require much more focus on researching and consulting on emerging issues, and on evaluating what works and why. The challenge will be to get the support of the APS leadership and the government of the day for this type of change, given the risk of failures being exposed. The counterpart is that successes can also be celebrated, and people’s trust in government will, I hope, be enhanced by greater honesty in making available objective and rigorous assessments of performance.