
Dr Martin Dunn

Conduct of the APS Review

This submission concerns the methodology used by the APS Review to inform its findings. In it I address some of the issues involved in conducting public reviews and propose some approaches. My arguments draw mostly on my experience as a staff member of the National Commission of Audit (2013-14) and the Defence Management Review (2007).

The Review has two broad tasks: identifying where public sector performance achieves better practice or falls short (the diagnostic problem), and identifying initiatives that could produce an overall improvement in public sector performance (solution identification). Typically, public reviews rely heavily on a program of interviews and public submissions to inform both tasks.

A widespread program of interviews will typically capture the views of senior decision-makers (in this case, I expect it will include agency heads, heads of human resources, and selected ministers or their senior staff). While these views are clearly important, they suffer from two main problems. Firstly, the senior perspective is disconnected from the problems occurring at the operational level of the organisation: it may be many years since these people worked as lower-ranked public servants (and some never have). Secondly, the information they receive is often filtered through other senior executives, and they can receive a level of service and responsiveness that is atypical of their organisation. One Secretary was happy to tell the staff of the department how good the new IT system was, when the typical user was experiencing daily crashes and long waits when calling the help desk.

A program of public submissions can produce a large volume of material to sort through, and there is often no easy way to determine which submissions represent expert opinion and which come from cranks. The volume of submissions is typically such that Panel members rarely look at more than a handful; more commonly, they rely on the review staff to screen the submissions and identify any that are particularly useful.

The problem with this mass of unstructured information in transcripts and public submissions is that it depends on the expertise of the Review Panel and staff to identify what is relevant. This is highly susceptible to confirmation bias, as information that fits preconceived notions is seen as inherently more reliable than information that does not.

Unfortunately, objective data is in short supply. Annual reports are meant to show agency performance, but they typically list deliverables completed in the previous year and provide little insight into whether these had any effect. The annual APS staff survey provides some information, but it is not structured as a diagnostic instrument and misses agencies that are not staffed by Public Service Act employees. Miscellaneous reviews provide some insights and case studies (the APS Capability Reviews are amongst the better examples, but that program appears to have been discontinued; see https://www.apsc.gov.au/capability-review-program). Ultimately, most of the relevant knowledge is in the heads of people experiencing public sector activities.

I would recommend that the Review consider using workshops/focus groups and surveys to gather evidence for both diagnosis and solution identification. The approach could involve:

Step 1. The Review Panel and staff should identify around a dozen major themes through brainstorming. My indicative list is:

Policy development

Service delivery

Staffing, recruitment and promotion

Staff development and training

Information and communications technology

Facilities, location and the physical work environment

Administrative and support functions

Internal communications and knowledge management

Contracting and procurement

Financial management

Performance and reporting

Step 2. Identify a reference group for each theme. I envisage each group comprising about a dozen people, mostly drawn from across the public service (a balanced mix of those directly involved in providing the particular function and those who are its customers), together with some outside experts – academics, and representatives of professional bodies or advocacy groups.

Your consultation may identify such people, or you may seek nominations.

Step 3. Have each group brainstorm the issues within its theme and identify some hypotheses.

Step 4. Use sample surveys of the APS (and, if relevant, of APS customers) to test each hypothesis. This should provide a clearer diagnosis and may indicate which solutions appear feasible (a sketch of one such test follows these steps).

Step 5. Using the survey data, invite each reference group to assess the appropriate responses to the issues. (Rather than brainstorming at this step, I would suggest something more structured, such as the nominal group technique, in which participants generate options independently before discussing and ranking them as a group.)
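To illustrate Step 4, below is a minimal sketch of how one hypothesis might be tested against survey returns, written in Python using the statsmodels library. The hypothesis, the groups and all figures are invented purely for illustration; a real survey design would of course be developed with the reference groups.

```python
# Hypothetical illustration of Step 4: a reference group hypothesises that
# "IT support is a bigger problem for service-delivery staff than for
# policy staff", and a sample survey asks each group whether IT support
# meets their needs.
from statsmodels.stats.proportion import proportions_ztest

# Invented survey figures: respondents agreeing that "IT support meets my
# needs", and total respondents sampled, for each group.
agree = [142, 151]      # service-delivery group, policy group
sampled = [400, 180]    # sample sizes for each group

# Two-proportion z-test: do the agreement rates differ significantly?
stat, p_value = proportions_ztest(agree, sampled)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```

A small p-value would indicate that the two groups genuinely experience the function differently, which both sharpens the diagnosis and suggests where remediation effort is best directed.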

This approach would allow the Review Panel to access consensus expert judgements on the performance of the public sector more readily.

Dr Martin Dunn

Date redacted