21st century delivery, regulation and policy capabilities

There is an opportunity to reprioritise time, effort and resources for deep research, analysis and evaluation, and big data and analytics, to underpin APS capacity to provide the highest quality advice to governments.

What we think is needed

  • Explicit acknowledgement – in agency planning, resourcing and reporting – of the importance of research, evaluation and data analytics in policy development and delivery.
  • Additional and ongoing resourcing to: build in-house research capability; sustain existing evidence-gathering tools and agencies; proficiently commission external research; and develop necessary digital talent and skills, particularly in data analytics and emerging technologies.
  • Evaluation capability and practices embedded across the APS, supported by central enabling advice and consistent methodologies, with specific requirements to undertake evaluations of major measures.
  • Experimentation with new models to challenge and disrupt traditional approaches to developing policy, regulation and services (for example, time-limited special purpose units).

What is shaping our thinking

  • The analysis and findings of the ANZSOG paper ‘Evaluation and learning from failure and success’ by Rob Bray, Matthew Gray and Paul ‘t Hart.
  • Public policy discourse on the role of evaluation in improving policy, and international experience in embedding evaluation. For example, the Government Accountability Office in the United States of America, and the What Works Network in the UK.
  • Global experience in the use of data analytics in policy development, including to simulate the impacts of proposed policy changes.
  • Feedback that applied research functions across the APS have diminished over time.

What we are still exploring

  • How best to overcome the understandable reluctance to identify, accept and act upon potential findings flowing from evaluations.
  • Options for the design and use of the ‘professions model’ in these areas.

Comments

Mon, 29 Apr 2019

There should be some wariness about locating any centralised evaluation or delivery functions within the central agencies, which lack the practical skills for this, preferring to talk about concepts and theories rather than the real practice of devising outcomes-based policy and determining whether those policies are achieving economic, social and environmental outcomes.

Far better for such a central function to be located independently, or with the Productivity Commission under a strong legislative mandate.

Also worth reflecting on Nicholas Gruen's thinking here: such a centralised role should not be solely about conducting evaluation but, more importantly, about capability building and assurance.


Thu, 25 Apr 2019

It's not only these functions that need reprioritising; we also need to show that we truly value, support, remove barriers for, and give enough scope and room to the people who drive and lead this type of work. Otherwise, it is very difficult for them to do what they need and want to do for the APS and its stakeholders.


Wed, 17 Apr 2019

Just do it


Sun, 31 Mar 2019

Providing a better supply of policy work is all great, but unless it is met with demand it may not be sustainable. I would strongly suggest that both the business case templates for funding and the cabinet submission processes be modified to require some demonstration of better policy effort, because many people will take the shortest route to secure funding. Many behaviours are driven by how funding works, so agile budgeting is also required if you want to move the APS beyond waterfall behaviours. The moment you ask for four or five years' worth of spending predictions, you create a culture that doesn't easily adapt to change (or reality, for that matter), and that is punished financially when experimentation proves an approach to be not quite right.

Perhaps include in these processes the requirement to provide evidence of the research and testing phase of the proposal, including lessons learned from public engagement on the idea. You could introduce agile budgeting by having a lean and very quick budget application process for Discovery and Alpha phases of new policy positions, programmes or services, but a full funding proposal is only made on the back of a successful Discovery and Alpha (which may include several pivots therein). If full funding happens at this later phase, there may be a lever to encourage more openness in the early design phases, which creates greater confidence in engaging the public and other sectors throughout Discovery and Alpha.

Fundamentally, if we want better policy outcomes, we need to be more open, iterative and willing to explore the good way, not just the expedient way.


Sun, 31 Mar 2019

  • A policy toolkit that clearly defines what good policy is (including the difference between policy, operational rules, guidelines, etc.) & the techniques of modern policy makers, including human-centred design, iterative prototyping approaches, true co-design with communities & multi-disciplinary approaches that draw together policy, drafting and implementation into the same room.
  • Annual "Policy Futures" event that brings together all major policy units across government(s) to share insights, identify gaps and emerging trends, understand the changing needs/values of the communities we serve, and explore big policy ideas openly and with input from the public. This creates a continual pipeline of policy big ideas for the government of the day to consider, as well as an opportunity for the public service(s) to get convergence of policy efforts and minimise the current widening gaps across portfolio lines. It also gives a chance to continually reimagine better policy futures, to encourage formative mindsets in our policy profession.
  • Test-driven legislation/regulation that is human and machine readable from the start, with api.legislation.gov.au established as the authority for machine-consumable prescriptive rules (leg/reg) for easier testing and faster deployment (imagine the human-readable form being enacted in parliament and available as an API the very same moment; a rough sketch of what consuming such a rule might look like follows this list). This is critical for greater consistency of applied leg/reg, as well as for traceability, accountability, trust & the ability to appeal.
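
As a purely illustrative sketch of that idea, and assuming a hypothetical rule format: the field names, the provision reference and the notion of fetching the rule from api.legislation.gov.au below are not a published specification, only a way to picture machine-consumable legislation.

```python
# Illustrative only: this rule format and its field names are assumptions for the
# sketch, not a published api.legislation.gov.au specification.
from dataclasses import dataclass

# A prescriptive rule that could be published alongside the human-readable provision,
# e.g. "a person is eligible if their taxable income is below the threshold".
RULE = {
    "provision": "s 7(1)",          # hypothetical provision reference, for traceability
    "parameter": "taxable_income",
    "operator": "less_than",
    "threshold": 45000,
    "effective_from": "2019-07-01",
}

@dataclass
class Assessment:
    eligible: bool
    provision: str                  # which provision produced the outcome, supporting appeal

def apply_rule(rule: dict, facts: dict) -> Assessment:
    """Apply a single machine-consumable rule to a set of facts."""
    value = facts[rule["parameter"]]
    if rule["operator"] == "less_than":
        return Assessment(eligible=value < rule["threshold"], provision=rule["provision"])
    raise ValueError(f"unsupported operator: {rule['operator']}")

if __name__ == "__main__":
    # In the proposal, the rule would be retrieved from the legislation API at the moment
    # the provision commences; it is hard-coded here so the sketch runs stand-alone.
    print(apply_rule(RULE, {"taxable_income": 42000}))
```

The point of the sketch is only that the same published rule could drive automated testing and consistent application, while every decision remains traceable back to the enacted provision.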

I would suggest this is one of the most critical areas, but it is dependent on highly confident (& brave), supported, capable & apolitical leadership to be successful. And it isn't just about advice to governments; it is about the public sector ensuring public good.


Fri, 29 Mar 2019

Program evaluation and its place in demonstrating APS program effectiveness needs to be a core skill of the SES and others with senior management responsibilities.

Showing that a program is effective also provides feedback on staff performance and morale. Many of the comments on this site have named "serving the public" as their reason for coming to work. Program evaluation is an essential means of demonstrating how well this is being done.

Evaluation is currently an adjunct to the PGPA Act's non-financial performance reports. See the DoF article: Morton, D., & Cook, B. (2018). Evaluators and the Enhanced Commonwealth Performance Framework. Evaluation Journal of Australasia, 18(3), 141-164. However, it should not be limited to those statements.

Nicholas Gruen's proposal of an independent Evaluator-General has much merit. Currently, the practice of evaluation is subject to the priorities of individual Secretaries. Three examples are: (1) formerly at the SES 3 level in Immigration under Martin Bowles; (2) currently an SES 2 in Industry; yet (3) at the low level of EL 2 in DSS.

The functions of evaluation should be part of the "professionalisation" of staff streams being considered. Corporate responsibility for evaluation also needs to be removed from individual Secretaries and anchored in a central, independent function (such as the ANAO).


Thu, 28 Mar 2019

Evaluation of programs and projects by agencies should be business as usual. What needs to be avoided is internal evaluation that starts from the agency's assumption that the program has achieved its outcomes. External, independent evaluation is still required to add rigour and challenge groupthink. The other obvious point is ensuring consultation and design thinking when planning programs, so that as many viewpoints as possible are considered.


Thu, 28 Mar 2019

Another thing to consider: it would be good for any evaluation framework to explicitly include the role of central agencies in the policy lifecycle. For example, when something does or doesn't work: what advice did central agencies, especially PM&C, provide? How did they influence (or not) the design? How did they promote (or not) connections between different policy interventions?

With respect, central agencies seem to get off scot-free when something goes wrong, enabling them to preserve their image of superiority, whilst simultaneously degrading their ability to learn and improve their systems of assurance.


Mon, 25 Mar 2019

The ANZSOG paper ‘Evaluation and learning from failure and success’, in its section ‘New policy and program development, approval and implementation’, explicitly calls for program logic to be incorporated into the policy development and decision-making process. It would be good for this to be lifted into the body of the Review report.

Too often we see poor policy start its life as a poorly articulated submission or NPP that doesn't clearly set out what it's trying to achieve. Program logic would help to address this key issue, which occurs early in the policy development lifecycle.

The central agencies which own the submission and NPP templates do not seem to have recognised this deficiency or followed better practice elsewhere, e.g. the Victorian Government, which uses Investment Logic Mapping to describe the intent of everything from policy to investment in a community event.

It would also be nice to see something in the report about the role of central agencies. Having worked in two central agencies, and in particular PM&C, I can say there does not seem to be much clarity about what the role should be; instead there is a tendency to resort to becoming a quasi-enforcement entity rather than playing a proper role around policy assurance (PM&C), financial assurance (Finance) and fiscal/economic assurance (Treasury). In many organisations there is a very clear role for assurance, and the product of assurance for the decision-maker (in government this would be Cabinet) is equally clearly developed.

The effect of the lack of a clear assurance role for central agencies should not be underestimated. Given their influence in the decision process, this can result in poor policy getting up and good policy being suppressed based on central agency whim.


Fri, 22 Mar 2019

In order to figure out the conundrum of evaluation, I think it would be worth considering who "owns" the evaluation of a policy intervention.

We tend to be quite comfortable saying that it is the Government of the day that owns policies and policy making through its Cabinet processes. But quite bizarrely in some respects, we seem to consistently devolve all subsequent policy activities to departments, including in particular the evaluation.

Evaluation really sits at the same level as policy making - it is the assessment to government of whether "their" intervention is achieving "their" objectives.

The mechanics of evaluation could be placed anywhere, centralised or decentralised. But it would be good to seek mechanisms to embed evaluation at the Cabinet level, e.g. as part of the Cabinet Handbook, or through some sort of legislative mandate (noting that Cabinet has no legislative basis, only convention).

The alternative, of course, is to report to Parliament as the Auditor-General does, but I wonder whether this still has the issue of shifting ownership of the policy and its evaluation away from the government of the day.