
Dr W. Jarvie and Dr T. Mercer

Submission: 

We are two former SES officers who are now academics involved in teaching public policy courses to APS officers as well as researching and publishing on government and public policy.

This is an important review. The Australian Public Service is a vital institution, and its officers' capabilities, values, priorities and modes of behaviour, as well as the structures they work in, are essential to the good functioning of Australian society and the economy.

Given this, it is disappointing that so little time has been allowed for public submissions to an inquiry with such broad terms of reference.

In view of the limited time, this contribution focuses on just two elements:

  1. the necessity of placing the building and the maintenance of public trust at the heart of the role of the Australian Public Service (APS).
  2. the importance of ensuring sufficient skills and resources are provided in the APS to manage and oversee outsourced functions, which are increasingly central to how government delivers services to citizens.

Public trust

The role of the APS in maintaining public trust is not mentioned in the TOR for the review. This is most disappointing. The decline of public trust in government is a central problem for our country (and indeed for most democratic countries). While trust is driven by many factors, including the behaviour of elected representatives and political parties, the openness and transparency of government decisions and actions, the role of the media, and the living standards and expectations of the public, the review should not shirk the issue or the role that the APS can and should play in restoring trust, or at least averting its further decline.

We would argue that all changes to the APS flowing from this review – in terms of capability, culture and operating model – need to be tested against the rubric of how, and by how much, they will increase public trust in the APS, its officers and government as a whole. This extends to assessing the impact of any changes likely to affect the public trust in, and independence of, government institutions such as the ABC, the Ombudsman and the Human Rights Commission, as well as key financial and regulatory agencies, and policing and security agencies. It also extends to how outsourced functions are managed, so that they enhance citizens' trust in government rather than undermine it – for example, through a lowering of service standards or fairness, including any public perception of unfair or unequal treatment of the more disadvantaged groups in the Australian community.

Outsourcing – making it work

As a result of New Public Management reforms over several decades, outsourcing of government functions to the private and community sectors is widespread and has, in some areas, delivered significant savings and improved results for successive governments. In delivering public policy there is now most often a 'three sector solution', in which the government sector works in close collaboration with not-for-profits and business. When key government functions are outsourced, they are expected to deliver 'value for money', i.e. efficiency savings and increased effectiveness. As noted above, we would argue that there should also be a test of whether they are likely to enhance citizens' trust, and how this is expected to occur.
We also consider it crucial for governments to recognise the linkage between successful outsourcing and sufficient internal resourcing and capability in APS agencies. To illustrate this, we have attached a paper, delivered at an international forum and published last year, on the outsourcing of Australia's national employment services and its operation over the 1998–2012 period. This major reform to employment service delivery has been recognised as highly successful, has been cited regularly by the OECD and has been studied by other countries.

In short, one of the key elements of its success was heavy investment in monitoring and evaluation, carried out within the Department of Employment by highly skilled and analytical teams, together with the preparedness of government and senior departmental decision-makers to take action informed by that evidence. For example, the early results showed that tendering for services based principally on price did not deliver good results for unemployed people; this approach was very quickly dropped, and subsequent tenders have all been assessed on the quality of services provided and the actual outcomes achieved for jobseekers.

In their recommendations to government through budgetary processes such as the Expenditure Review Committee, central agencies such as the Department of Finance and the Department of the Prime Minister and Cabinet need to recognise that significant outsourcing activities must be funded to undertake this critical monitoring and evaluation. Similarly, line departments need to invest in the analytic capability of their staff and in data collection and analytics, to ensure they are able to actively support and manage the outsourced function. This will help to ensure that "intelligent outsourcing" becomes the norm, not the exception.

Dr Wendy Jarvie

Adjunct Professor, School of Business

University of NSW Canberra

Dr Trish Mercer

Visiting Fellow, Australia and New Zealand School of Government

Australian National University

Attached document (automatic transcription):

13. Australia's employment services, 1998–2012: Using performance monitoring and evaluation to improve value for money

Wendy Jarvie and Trish Mercer

The reform of employment services delivery in Australia

Australia, together with the Netherlands, has been recognised as a world leader in the introduction of market competition for the provision of employment assistance to unemployed jobseekers. Yet as Struyven (2004: 3) has observed, the creation of a quasi-market in employment service provision is not a simple choice for government and requires a continual and complex 'balancing act' between government regulation and creating sufficient room for market competition, and also between the goals of efficiency and equity. This chapter investigates the intensive evaluation and performance monitoring processes that the Australian Government invested in and utilised over the 15 years from 1998 to support the development and fine-tuning of the market delivery of employment services, and to drive continual improvement in value for money.


In the early 1990s, a period of experimentation had begun in the delivery
of employment assistance, which is a national government function in
Australia. The Labor Government of Paul Keating had moved beyond
the traditional provision of such assistance by its public provider
(the Commonwealth Employment Service) to encourage contestability in
employment services, including an innovative case management approach
for the long-term unemployed delivered by the community sector and
private contracted case managers and a billion-dollar investment in
training programs under the Working Nation program (Davidson and
Whiteford 2012: 53). By 1995, the last year of the Keating Government,
the annual cost of employment and labour market assistance programs
was over $4 billion. Following the election of John Howard’s Coalition
Government in 1996, what was seen as a more radical experiment was introduced in May 1998: the Department of Employment[1] contracted a Job Network of community-based and private providers to deliver employment assistance to unemployed jobseekers and also to employers (Thomas 2007: 1–2). As well as delivering significant budget savings, this reform, the government contended, would address known deficiencies in the existing provision of employment assistance, which had not achieved any significant difference in getting the unemployed into regular employment, while retaining the case management approach with its emphasis on flexible and individualised assistance. At the same
time, the government tightened the requirements on those receiving
unemployment benefits to actively look for work (known as ‘activity
testing’) and increased the sanctions for failing to do so (Thomas 2007:
10–11).
The rationale for outsourcing employment services was that it would
ensure a greater focus on achieving outcomes for clients at lower cost to
government through:

  1. paying for client outcomes rather than inputs
  2. creating competition between providers for
    a. employment services contracts (through tendering arrangements)
    b. jobseeker clients (who could choose their employment service
    provider).

[1] The Department of Employment has experienced a number of machinery-of-government (and thus name) changes since 1998. For simplicity, it is referred to as the Department of Employment in this chapter.


It was thus intended to focus provider strategies, energies and resources on achieving outcomes for clients, at the lowest cost possible, and not on providing activities for clients to do. This was in line with the prevailing New Public Management (NPM) theory of public administration, with its shift in focus from inputs to outcomes. A declared objective of the reforms was to obtain better value for money (PC 2002: 3.2).
The Job Network system was managed by the Department of Employment (for an explanation of its role, see Appendix 13.1). It operated through the referral by the newly established public benefits agency, Centrelink, of jobseekers receiving government income support to the contracted providers, who had flexibility in determining what 'employment assistance' (rather than a conventional labour market program, as under the previous system) would be appropriate for an individual jobseeker.
Fees paid to providers comprised two components: one fee when a jobseeker commenced with them and a second when an employment or other outcome was obtained. Fees were on a sliding scale, with higher fees set for those who remained in employment for 26 weeks or more. Fees for both components also varied depending on the level of disadvantage the jobseeker faced, as assessed by Centrelink through the Job Seeker Classification Instrument (JSCI). The higher fees were intended to offer providers an incentive to make the greater effort required to help more disadvantaged jobseekers.
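The structure of these payments lends itself to a brief illustration. The Python sketch below mirrors the two-part, disadvantage-weighted fee design just described; every dollar amount, multiplier and band label in it is a hypothetical stand-in, since actual fee schedules varied by contract round and are not given in this chapter.

    # Hypothetical sketch of the two-part fee structure described above.
    # All dollar amounts, multipliers and band labels are invented.
    DISADVANTAGE_MULTIPLIER = {"low": 1.0, "moderate": 1.5, "high": 2.5}

    BASE_COMMENCEMENT_FEE = 500   # paid when a jobseeker commences (hypothetical)
    BASE_OUTCOME_FEE = 2000       # paid when an outcome is achieved (hypothetical)

    def provider_fees(disadvantage_band: str, weeks_in_employment: int) -> float:
        """Total fees for one jobseeker: a commencement fee plus an outcome
        fee, both weighted by assessed disadvantage, with a higher rate for
        outcomes sustained for 26 weeks or more."""
        m = DISADVANTAGE_MULTIPLIER[disadvantage_band]
        total = BASE_COMMENCEMENT_FEE * m
        if weeks_in_employment >= 26:    # sustained outcome: higher fee
            total += BASE_OUTCOME_FEE * 1.5 * m
        elif weeks_in_employment > 0:    # shorter-lived outcome: base fee
            total += BASE_OUTCOME_FEE * m
        return total

    # A highly disadvantaged jobseeker who stays employed for 26 weeks
    # attracts the largest payment; an advantaged jobseeker with a
    # short-lived outcome attracts far less.
    print(provider_fees("high", 26))   # 8750.0
    print(provider_fees("low", 12))    # 2500.0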
While the key principles of the system remained unchanged—such as having contracted employment service providers and payments for outcomes—the system itself underwent significant development and modification between 1998 and 2012. Broadly, there were three main phases (see Table 13.1):

  1. The Job Network 'Black-box'[2] Market (1998–2003): The initial development phase, in which contracted providers had significant discretion as to what 'employment assistance' they provided and which focused on outcomes over processes (i.e. 'black-box' methods).
  2. The Job Network 'Regulated Market' (2003–09) (also called the Active Participation Model [APM]): The second phase, in which there were increased government regulation and monitoring of providers, with a prescribed continuum of services for jobseekers, in response to the discovery that providers were not investing sufficient resources in their most disadvantaged jobseekers.
  3. The Job Services Australia (JSA) 'Inclusive Market' (2009–12): The revamping of the system under the new Labor Government of Kevin Rudd, which rolled seven schemes into one with four 'streams' of assistance for the unemployed, a greater focus on the most disadvantaged and more transparent provider star ratings.

[2] This was the term commonly used for this first phase of the Job Network.

Improving value for money
The budgetary gains for the government from introducing the Job
Network were evident from the outset: there was an immediate reduction
in the national budget spent on active labour market programs, from
$4.08 billion in 1995–96 to $2.56 billion in 1998–99 (Organisation for
Economic Co-operation and Development [OECD] 2001: 205). There
was an associated decline in spending on active labour market programs as a share of gross domestic product (GDP), from 0.8 per cent to 0.4 per cent over two years (OECD 2001: 13).

As well as clear budget savings, there were significant reductions in the average cost per employment outcome.[3] The employment department,
in its evaluation report in 2002, estimated that Job Network costs per
employment outcome were the lowest achieved in the previous decade:
about $5,000–$6,000 since mid-1998, compared with between $10,000
and $16,000 under Labor’s Working Nation programs in the mid-1990s
(DEWR 2002b: 4). This decline in costs per employment outcome had
been produced through both lower unit costs and higher employment
outcomes (Davidson and Whiteford 2012: 108).
The marked change in cost per employment outcome is shown in Figure 13.1. Over time, moreover, this cost continued to decline (Figure 13.2). The sustainability of outcomes achieved by jobseekers was maintained, together with improvements in net impact.[4]

[3] 'Cost per employment outcome' is the average unit cost of all programs divided by the proportion of participants in employment three months after leaving the program (Davidson and Whiteford 2012: 108).
[4] Net impact is the measure of the difference that employment services have made to clients' outcomes, relative to their expected outcomes without assistance. See, for example, DEWR (2003: 98).
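The cost-per-outcome measure defined in footnote [3] is simple arithmetic; the sketch below uses purely illustrative figures (the $2,750 unit cost and 50 per cent employment rate are invented, chosen only to land inside the $5,000–$6,000 range reported above).

    def cost_per_employment_outcome(avg_unit_cost: float, employment_rate: float) -> float:
        """Average unit cost of all programs divided by the proportion of
        participants in employment three months after leaving the program
        (the definition in footnote [3])."""
        return avg_unit_cost / employment_rate

    # Illustrative figures only.
    print(cost_per_employment_outcome(2750, 0.50))   # 5500.0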


Surveys showed that both employers and jobseekers were happier with the new arrangements, and the model of provision proved to have sufficient flexibility to deal with changes in labour market conditions, including the reductions in unemployment, the emergence of skills shortages up to 2007 and the worsening employment situation with the Global Financial Crisis (GFC) of 2008. The effectiveness, including cost-effectiveness, of this model of service delivery has been recognised by the OECD (2001: 20; 2012: 13), by external researchers (Thomas 2007: 15; Davidson and Whiteford 2012: 57) and through an independent review by the government's research and evaluation body, the Productivity Commission, in 2002.[5] Clearly, the government's objective of improving value for money was being met.

[Figure 13.1 Decline in cost per employment outcome, 1991–2006: line chart of cost per outcome ($A), July 1991 to July 2005, with the introduction of the Job Network marked. Source: DEEWR (2007: 138).]

[5] This report, released in June 2002, contained some criticism of elements of the Job Network system, but was supportive overall, concluding that the advantages of the new market for employment services 'outweigh its limitations' because 'it sets out clear objectives, provides stronger incentives for finding ways of achieving job outcomes and encourages cost efficiency' (PC 2002: xxvi, xxxiii).

Table 13.1 Employment services in Australia, 1998–2012

'Black-box' Market: Job Network (Howard Coalition Government)[1]
  Contract periods (no. of providers)[2]: 1998–2000 (306); 2000–03 (205)
  Key features: Eligible jobseekers referred by Centrelink to Job Network providers, who had significant discretion and were contracted for results. Job Network services for jobseekers dependent on assessed needs through Job Seeker Classification Instrument (JSCI)[4] score. Tighter activity test requirements for unemployed income support recipients. Introduction in 1999 of biannual Star Ratings[5] system for performance evaluation—used to reward higher-performing providers and remove business from poorer performers.
  Major departmental evaluations[3]: Evaluation of Job Network (2000, 2001 and 2002); Performance review of JSCI (2000); Net impact of labour market programs (1997)

Regulated Market: Active Participation Model (APM) (Howard Coalition Government)[1]
  Contract periods (no. of providers)[2]: 2003–06 (109); 2006–09 (103)
  Key features: Level of Job Network services dependent on duration of unemployment as well as JSCI score—greater purchaser oversight. 'Intensive Assistance' provided if jobseeker still unemployed after 12 months. 'Job Seeker Account' to support provider investment in disadvantaged jobseekers. Mandatory IT system for information flow.
  Major departmental evaluations[3]: Job Network best practice (2006); Net impact study of Intensive Assistance and Job Search Training (2003); Job Seeker Account evaluation (2006); APM evaluation (2007)

Inclusive Market: Job Services Australia (JSA) (Rudd Labor Government)[1]
  Contract period (no. of providers)[2]: July 2009–2012 (116)
  Key features: Seven programs integrated into one, with four 'streams' (levels) of assistance based on extent of jobseeker disadvantage and timing and type of services from providers. Greater flexibility in program assistance. JSCI score again significant for type of assistance available.
  Major departmental evaluations[3]: Independent review of the jobseeker compliance framework (2010); Net impact study of labour market assistance (2010); Review by expert reference group of Star Ratings (2010)

Table notes:
[1] Descriptors from Considine and O'Sullivan (2014).
[2] Taken from OECD (2012: 76).
[3] Conducted by officers in the employment department or commissioned from experts with departmental support.
[4] Tool that assesses how difficult it will be for the jobseeker to find employment.
[5] A performance management system developed by the employment department that gives providers a rating (between one and five stars) based on their comparative performance in achieving employment or educational outcomes for jobseekers.
Sources: Davidson and Whiteford (2012); OECD (2012); Borland (2014); Considine and O'Sullivan (2014).

[Figure 13.2 Cost per employment outcome ($A, scale 0–6,000) across contract periods: Job Network contract 1 (1998–2000), contract 2 (2000–2003), contract 3 (2003–2009) and JSA (2009–2010). Source: Data from DEEWR (2011).]

Evaluation and its role in program design and management

The cost-per-outcome estimates, together with estimates of net impact and other analyses, such as identifying which jobseekers were being successfully assisted and which were less well supported, were obtained from a comprehensive and sustained evaluation and monitoring program that began in 1998 and was continued under both Coalition and Labor governments.


The first major evaluation strategy was announced in April 1998. It was
designed to enable the Howard Government ‘to assess how well [the]
Job Network was working and to provide information for later policy
adjustment’ (DEWR 2002b: 1). It was also to provide solid public
evidence on the impact of such a radical and controversial shift in delivery
arrangements.[6] Three stages of evaluation were carried out. The first two
reports on the implementation of the Job Network and early indicators of
the impact of assistance were published in 2000 and 2001, while the final
stage, released in 2002, focused on the lessons learnt from evaluating the
Job Network, including its effectiveness in improving the employment
prospects of jobseekers on a sustainable basis (DEWR 2000, 2001, 2002b).
The evaluation strategy also required that the Productivity Commission
review the policy framework for the Job Network.
Each major phase of the Job Network and of JSA had an extensive set of
evaluation products (see Table 13.1). The investment was significant; by
way of example, the evaluation strategy for the JSA in 2009 was costed
at $8.3 million (DEEWR 2009). The employment department managed
all the evaluations in-house. Reports were based on both quantitative and
qualitative analysis, which was conducted by both in-house experts in
data analysis and evaluation and external consultants contracted by the
department to undertake research and a major survey program of jobseekers,
employment providers and employers. The department conducted several
types of evaluations, as the OECD (2012: 228) observed: evaluations
of specific programs, processes or jobseeker outcomes, estimates of the
net impact of programs and broader strategic reviews employing a range
of evidence. The strength of departmental administrative and program
monitoring data was crucial to these evaluations. For example, data on
jobseeker outcomes from employment assistance were collected from a
post-program monitoring survey carried out three months (and sometimes
six months) after assistance, and data on jobseeker characteristics were
collected through the JSCI. Government income support data were also
used. The extensive internal capability was built on an existing foundation
of research and evaluation expertise, which had been enhanced following
the introduction of the Job Network.

[6] The driving force inside the Howard Government for these reforms was the Minister for Schools, Vocational Education and Training, David Kemp, who was known for his strong interest in gathering an evidence base to support the government's major reforms (Jarvie and Mercer 2015: 346, 351).


The evaluation and monitoring activities were not only extensive; the findings were also very influential in modifying employment services. Both the OECD and external researchers have commented that a characteristic of Australia's employment services system was the high policy relevance of its evaluations and monitoring, the one reservation being that the detailed evaluations and research were done within the department, so the detailed evaluative data have not been subjected to external scrutiny (OECD 2012: 225).
The first three evaluations of the Job Network were particularly influential in the design of the second phase, known as the Active Participation Model (APM), which was introduced in 2003 and responded to several of the early evaluation findings about the Job Network's performance (Table 13.1) (DEWR 2002b: 6; OECD 2012: 6). Through some key contract changes, the Howard Government accepted that the initial design of this radically new system had introduced unintended disincentives in the market to offer sustained services for 'difficult' jobseekers. In particular, the evaluation finding that the most disadvantaged jobseekers often received limited assistance from their provider underpinned the introduction of fixed service fees that were weighted towards those jobseekers who were most difficult to place (Davidson and Whiteford 2012: 58). Additionally, the Job Seeker Account, also introduced in 2003, established a quarantined funding pool to enable providers to expend funds on measures to address jobseekers' barriers to employment. Under this new APM, greater oversight of provider activity was established, with information on provider contact with jobseekers and assistance provided now being reported to the employment department through a central information technology (IT) platform known as EA3000 (Davidson and Whiteford 2012: 58).
The design of the subsequent JSA model, introduced in 2009, was also considerably influenced by the department's evaluation findings—in particular, its net impact studies of labour market assistance. This included the decision to integrate seven existing programs into one and to concentrate assistance on the most disadvantaged jobseekers, given the evidence that the largest net impact from employment providers was associated with this category of the unemployed (OECD 2012: 224). The evaluation also showed that giving intensive support to clients for 12–18 months was too long, and this period was subsequently cut back to six months.


With the improved access to information following the introduction of the APM and the EA3000 platform in 2003, the department conducted seminars and published material on 'best practices' in the Job Network, informed by internal analysis of detailed administrative data on employment outcomes (DEWR 2006; Davidson and Whiteford 2012: 66).
Given the significance of the government’s investment in employment
services and public scrutiny of this new approach, the employment
department’s evaluation and monitoring activities have been subject
to ongoing external scrutiny, such as in the Productivity Commission’s
independent review in 2002 and in the two major reports by the OECD
published in 2001 and 2012. In response to methodological issues
identified by the Productivity Commission and the OECD in 2001, the
employment department reassessed its approach to measuring the net
employment gains provided by the Job Network (Thomas 2007: 15–16).

The role of star ratings in achieving value for money
As described earlier, improved value for money was undoubtedly
achieved, although large efficiency gains and cost reductions took time to
emerge (Finn, quoted in Borland 2014: 10). What was unexpected was
that some of the mechanisms by which these were achieved were quite
different to the original conception. For example, the original idea to
choose providers on the basis of price tenders was quickly abandoned and
replaced with tenders based on expected quality and outcomes. And one
element, ‘star ratings’ for providers, has proved to be much more powerful
than originally conceived.
Star ratings of providers—where providers were given a rating of between one and five stars (one star being poor and five stars being the highest rating)—were developed in 1999 with the assistance of the South Australian Centre for Economic Studies at Adelaide and Flinders universities. The system was originally designed as a mechanism to signal to jobseekers the relative effectiveness of local providers, and was thus intended to drive competition between providers for clients. In practice, it very rapidly became the major mechanism for rewarding high-performing providers with more business and contracts and for removing relatively poorly performing providers. Arguably, it became the key driver in achieving value for money in the employment services program over the ensuing 15 years.


What are star ratings? How are they calculated?

Star ratings are measures of provider performance adjusted for differences in jobseeker characteristics and local labour market conditions. The core features of the ratings have remained broadly constant since they were introduced, although the way they are calculated (including the weightings given to different variables), the distribution and the number of performance levels[7] have varied with different phases of the employment services market.
The main element that determines the star rating of a provider at a site has been the short-term (three to six months) employment or educational outcomes of the jobseekers assisted by that provider at that site. There have also been efficiency variables, such as the time taken to 'place' jobseekers. For each provider site, the outcomes for jobseekers (disaggregated by their characteristics and local labour market conditions), together with other variables, each with a weight, are compared with the national estimate for all providers via a regression (PC 2002: 11.19). The differences at each provider site between the outcomes obtained and the expected outcomes are then allocated a star rating. Overall, the star rating reflects the value added by a provider compared with other providers.
Initially, under the Job Network, the distribution was fixed, so that, even if a provider improved their performance in absolute terms, they would receive an improved star rating only if they improved their performance compared with other providers. After 2009, following an expert review, ratings were based on the percentage difference between each site's performance and the national average, which reduced the number of providers falling into the lowest star bands, and was deemed fairer by providers.
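The mechanics described in the last two paragraphs can be sketched in a few lines of Python. The sketch below is a deliberately simplified stand-in: it fits a linear probability model on synthetic 'national' data, compares a site's actual outcome rate with its regression-expected rate, and bands the percentage difference in the post-2009 style. The variables, model form and band cut-offs are all assumptions for illustration; the department's actual model and weightings were considerably more elaborate.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic national data: one row per jobseeker, with illustrative
    # stand-ins for JSCI score and local labour market conditions, and a
    # placeholder 0/1 employment outcome.
    n = 10_000
    jsci = rng.normal(50, 10, n)
    local_unemployment = rng.normal(5.5, 1.5, n)
    X = np.column_stack([np.ones(n), jsci, local_unemployment])
    y = (rng.random(n) < 0.4).astype(float)

    # Fit a simple linear probability model on national data to estimate
    # the expected outcome rate for any mix of jobseeker characteristics.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    def star_rating(site_X: np.ndarray, site_y: np.ndarray) -> int:
        """Band a provider site by the percentage difference between its
        actual outcome rate and the rate expected for its caseload
        (post-2009 style). Band cut-offs are invented for illustration."""
        expected = float((site_X @ beta).mean())
        actual = float(site_y.mean())
        pct_diff = 100 * (actual - expected) / expected
        for upper_bound, stars in [(-30, 1), (-10, 2), (10, 3), (30, 4)]:
            if pct_diff < upper_bound:
                return stars
        return 5

    # Because a site is compared with the outcomes expected for its own
    # caseload, serving more disadvantaged jobseekers does not of itself
    # lower its rating.
    print(star_rating(X[:200], y[:200]))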
The variables, and particularly the weightings given to them, varied significantly between contracts. Under the first phase of the Job Network (1999–2003), the two performance indicators for star ratings were:

  1. The average time taken for jobseekers to achieve employment placements (which was designed to discourage 'parking' and the delaying of outcomes until higher outcome payments were available).
  2. The proportions of jobseekers for whom outcome fees were paid (which was designed to reinforce the focus on job outcomes).

[7] For a period, there were nine levels, with four 'half' stars and five full stars.


Under the second phase—the APM (2003–09)—the greatest weight
(attracting 60 per cent of the weightings within the star ratings) was
given to outcomes attracting full outcome payments; generally, this
was employment sufficient to take jobseekers off benefits for at least three
to six months (see Table 13.2).

Table 13.2 Weightings used for the star ratings under the Active Participation Model, from 2005 (per cent)

  Interim 'full' outcomes: 40
  Final 'full' outcomes: 20
  Intermediate outcomes[1]: 20
  Job placements: 10

Table note: [1] Includes a 5 per cent weighting for educational outcomes.
Notes: Final 'full' outcomes are employment outcomes at 26 weeks; interim ones are at 13 weeks. Percentages do not add to 100.
Source: Davidson and Whiteford (2012: 66), based on Australian National Audit Office (ANAO 2005).

There were significant changes for the third phase under the JSA
(2009–12). With the change from Howard’s Liberal–National Coalition
Government to the Rudd Labor Government in 2007, the star ratings
system was revised, following a review by an expert reference group.
The new calculation was much more complex and reflected the new
Labor Government’s focus on helping the most highly disadvantaged.
Jobseekers were allocated to one of four ‘streams’, with one being
relatively advantaged and four the most disadvantaged. For the purposes
of star ratings, the outcomes achieved by the ‘stream four’ jobseekers were
given four times the weight of those in stream one. There was also greater
weighting of 26‑week outcomes compared with 13-week outcomes,
the introduction of a weighting for ‘bonus outcomes’ for employment
obtained after training and a weighting for ‘social outcomes’ for jobseekers
who completed stream four assistance (for details and changes from the
previous system, see Appendix 13.2).

How were star ratings used?
As mentioned, initially, it was expected that the star ratings would be used
by jobseekers to choose their provider. In line with this, from 2000, the
employment department began to regularly publish star ratings of provider
performance at over 1,400 individual sites. However, evaluations and
jobseeker surveys regularly reported that the ratings were not influencing
jobseekers (PC 2002: xxxii; Struyven 2004: 13).


The employment department's 2007 evaluation reported that the regular release of ratings:

  coincided with a sustained improvement in the employment outcomes of jobseekers assisted by the Job Network. This improvement seemed greater than the level of improvement which could have realistically been expected from improvement in the labour market. (DEEWR 2007: 141)

It related this to the fact that the star ratings provided Job Network members with a strong incentive to focus on securing outcomes, job placements and interim outcomes, because these were the primary performance measures used for the estimation of the ratings. However, later assessments concluded that their major impact was through their use in eliminating employment service providers that performed poorly (OECD 2012: 13). In tender rounds from 2000 onwards, providers with low star ratings lost business, which was reallocated to higher-performing providers and to some new entrants to the market.
The first major use of the star ratings for allocation of business occurred in the 2003 Job Network tender. In this tender round, the 'top' 60 per cent of providers based on star ratings had their contracts rolled over via an 'invitation to treat', leaving the bottom 40 per cent to compete with new entrants to the market (Davidson and Whiteford 2012: 65–6).[8] After this tender, the number of organisations in the network was almost halved (to 109), with just seven new entrants (Finn 2008).
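The reallocation rule used in that tender round is easy to state in code; the sketch below uses hypothetical provider names and ratings.

    # Sketch of the 2003 tender rule described above: the 'top' 60 per cent
    # of providers by star rating are rolled over, the rest must compete
    # with new entrants. Names and ratings are hypothetical.
    star_ratings = {"A": 4.5, "B": 2.0, "C": 3.5, "D": 5.0, "E": 1.5}

    ranked = sorted(star_ratings, key=star_ratings.get, reverse=True)
    cutoff = round(0.6 * len(ranked))

    rolled_over = ranked[:cutoff]   # contracts renewed via 'invitation to treat'
    re_tendered = ranked[cutoff:]   # compete with new entrants to the market

    print(rolled_over)   # ['D', 'A', 'C']
    print(re_tendered)   # ['B', 'E']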
In 2006, the same process was repeated, but a much lower proportion of the business was put out to tender; only 8 per cent of the (lowest-performing) providers were required to tender. This was partly to reduce the disruption that occurs from a major turnover of providers (Finn 2008). In place of regular and major tender processes, a system of rolling six-monthly performance reviews was introduced. Providers whose sites within a given area had consistently low star ratings had their market share reduced, sometimes to zero, with remaining business allocated by the department to other local providers or put out to tender.

In the JSA period (2009–12), star ratings continued to be used to determine future 'business shares' among local providers, but reallocations occurred on an 18-month cycle rather than the previous six-month cycle.

[8] There was also a quality indicator that was expected to be used only rarely to adjust provider business shares.


This was in response to the widespread criticism of the six-monthly
cycle from providers on the grounds that it encouraged ‘short-termism’
in service delivery strategies and contributed to instability in the Job
Network, especially a high turnover of staff who could not be guaranteed
employment throughout the three-year tender period (O’Connor 2008,
quoted in Davidson and Whiteford 2012: 65–6).
While the removal of poorer-performing providers is regarded as having had the greatest impact on the operation of the market, star ratings were also useful in:

  1. driving servicing efficiency, in terms of reducing the time taken to achieve outcomes for clients
  2. encouraging provider focus on government priorities, such as achieving outcomes for the most disadvantaged clients
  3. reducing the workload for the department associated with new contract periods (through the rolling over of contracts).

From early on, this rating system was seen as performing an 'essential function' in the operation of the market (DEWR 2002a: 1). Both Coalition and Labor governments clearly viewed star ratings as a useful tool. Star ratings were gradually extended to other providers of employment-related services, with the first star ratings of provider performance published for Disability Employment Services in July 2006 and for Vocational Rehabilitation Services in 2007. Star ratings have also been continued for subsequent employment services arrangements under the JSA, 2012–15, and the jobactive model from 2015.

Acceptance of star ratings

While the introduction of star ratings had an immediate impact on effectiveness and cost (see Boxall 2003), it took some time before they were fully accepted by the industry. Originally, there was relatively little publicly available information on how the ratings were calculated and their composition, but, after the expert review in 2009, which led to greater transparency and less frequent reallocation of business, there was much greater acceptance.[9] The Australian National Audit Office (ANAO) reported in a 2013–14 audit that '[t]he approach to measuring performance was generally accepted' by JSA providers and '[t]he Department has consulted with providers, and as a result aspects of the performance measures have been adjusted over time to improve its operation' (2014: 2.43).

[9] Interview with S. Sinclair, Chief Executive Officer, National Employment Services Association, September 2015.
There was general acceptance by the providers' peak body that the variables used, and the behaviour they reward, have been a key driver of performance:[10] 'The Star Rating System is defensible, with a sound mathematical basis, and essentially the best methodology to normalise each site and contract ESA (Employment Services Area)' (NESA 2015: 6).

One reason that star ratings and their component performance measures have driven performance is the confidence these employment providers and their peak body have had in the integrity of the system, which was managed by the Department of Employment. While there was always the danger of fraud (for example, DEEWR 2012), there was confidence in the data in the system.[11] There was also confidence in the integrity of tender processes and in the mechanisms to get feedback on provider performance (for audit and fraud controls, see Box 13.1).

[10] ibid.
[11] ibid.

Box 13.1 Audit and fraud controls
• Tendering process: External probity adviser.
• Contract managers in each state. Providers assigned a risk rating, which
determines the level of monitoring.
• IT system: Verifies providers’ claims against social security data.
• Surveys of 400,000 jobseekers annually to gain feedback on their providers.
• Jobseeker complaints process and a ‘tip-off’ line.
• Internal and special audits.
Supplemented by broader controls, including the ANAO, parliamentary inquiries and
the ombudsman.
Source: DEEWR (2012).

Conclusion

The outsourcing of service provision from government to private and community providers is conceptually simple and attractive to governments seeking to improve value for money. This example from Australia shows that improved value for money can be achieved, but it has required a complex system of management, including an intense focus on the performance of providers and the outcomes of the system. It has required experimentation and an acceptance that some elements have been more effective than others (Table 13.3).

Table 13.3 Employment services: Design features to drive better outcomes at lower cost

Payment for outcomes
  Effective? Yes—in focusing providers on getting employment outcomes.
  Comments: While it was effective, it required constant fine-tuning and supplementation with other mechanisms to prevent 'parking' of hard-to-help clients (where 'parking' means clients were given very minimal assistance). It also required constant monitoring for fraud.

Targeting jobseekers using the JSCI: an assessment of how difficult it will be for the jobseeker to get a job
  Effective? Generally, yes—very important for targeting support to the most disadvantaged.
  Comments: Greater fees were paid when outcomes were achieved for jobseekers with a high JSCI. Use of the JSCI in determining what services a jobseeker would get and the outcome fees paid changed between phases/contracts.

Tendering
  Effective? Effective when tendering on quality and outcomes; ineffective when tendering on price.
  Comments: Tender rounds created major disruption to services for clients when there was large turnover of providers—for example, in 2009.

Jobseeker clients able to choose provider
  Effective? Not effective.
  Comments: Jobseekers would tend to use the closest provider. Very few exercised choice based on provider performance.

Star ratings of providers
  Effective? Very effective in driving value for money over the period 2000–12.
  Comments: Used by the employment department in 'rolling over' the contracts of the best-performing providers, awarding tenders and removing poor performers. Needed regular fine-tuning to reflect changes in labour market conditions, and constant monitoring for fraud. Not effective in rating the performance of specialist providers working with very hard-to-help clients.

Source: Authors' work.

There have been many elements that have contributed to the results achieved in the privatised employment services system. One element was the fact that, while it was a radical change, the reform was built on previous experience with the outsourcing of some employment services and on learnings from a long investment in research, evaluation and stakeholder engagement. Another important contributor was the targeting of highly disadvantaged jobseekers through the JSCI tool. The third element was its outcomes focus—its clear performance framework, payment for outcomes and, in particular, the use of provider star ratings in contract renewal and reallocation of business.

Underpinning all of these were the sustained and extensive public monitoring and evaluation, which provided the star ratings and other measures of provider and system outcomes, enabling regular fine-tuning of the system. In addition, it has required a core group of public officials with analytical and management capacity who were trusted by providers; a strong audit and fraud system; and management based on a clear focus on the evidence of 'what works' and what needs to change, and a preparedness to modify the system in line with that evidence.

References

Australian National Audit Office (ANAO). 2005. Implementation of Job Network employment services contract 3, Department of Employment and Workplace Relations; Centrelink. Audit Report No. 6 2005–06. Canberra: ANAO.

Australian National Audit Office (ANAO). 2014. Management of services delivered by Job Services Australia, Department of Employment. Audit Report No. 37 2013–14. Canberra: ANAO.

Borland, J. 2014. 'Dealing with unemployment: What should be the role of labour market programs?' Evidence Base 4. Melbourne: Australia and New Zealand School of Government. Available from: journal.anzsog.edu.au (accessed 17 July 2017).

Boxall, P. 2003. Measuring performance: The state of the art. Presentation to the Australia and New Zealand School of Government.

Considine, M. and S. O'Sullivan. 2014. 'Introduction: Markets and the new welfare—Buying and selling the poor'. Social Policy and Administration 48(2)(April): 119–26. doi.org/10.1111/spol.12052.

Davidson, P. and P. Whiteford. 2012. An overview of Australia's system of income and employment assistance for the unemployed. OECD Social, Employment and Migration Working Papers No. 129. Paris: OECD Publishing. doi.org/10.1787/5k8zk8q40lbw-en.

Department of Education, Employment and Workplace Relations (DEEWR). 2007. Active Participation Model Evaluation, July 2003–2006. November. Canberra: Australian Government.

Department of Education, Employment and Workplace Relations (DEEWR). 2009. Evaluation Strategy for Job Services Australia 2009 to 2012. Canberra: Australian Government.

Department of Education, Employment and Workplace Relations (DEEWR). 2011. Taskforce on Strengthening Government Service Delivery for Job Seekers. Report to the Secretary of the Department of Education, Employment and Workplace Relations and the Secretary of the Department of Human Services. 30 March. Canberra: Australian Government.

Department of Education, Employment and Workplace Relations (DEEWR). 2012. Job Services Australia provider brokered outcomes. Audit Report. Canberra: Australian Government.

Department of Employment and Workplace Relations (DEWR). 2000. Job Network evaluation stage one: Implementation and market development. Evaluation and Program Performance Branch, Labour Market Policy Group, EPPB Report 1/2000. Canberra: Australian Government.

Department of Employment and Workplace Relations (DEWR). 2001. Job Network evaluation stage two: Progress report. Evaluation and Program Performance Branch, Labour Market Policy Group, EPPB Report 2/2001. Canberra: Australian Government.

Department of Employment and Workplace Relations (DEWR). 2002a. Government Response to the Productivity Commission Independent Review of Job Network. Canberra: Australian Government.

Department of Employment and Workplace Relations (DEWR). 2002b. Job Network evaluation stage 3: Effectiveness report. EPPB Report 1/2002. Canberra: Australian Government.

Department of Employment and Workplace Relations (DEWR). 2003. Intensive assistance and job search training: A net impact study. Evaluation and Program Performance Branch, Employment Analysis and Evaluation Group, December, EPPB Report 2/2003. Canberra: Australian Government.

Department of Employment and Workplace Relations (DEWR). 2006. Job Network Best Practice. Canberra: Australian Government.

Finn, D. 2008. The British 'Welfare Market': Lessons from contracting out welfare to work programmes in Australia and the Netherlands. York, UK: Joseph Rowntree Foundation. Available from: www.jrf.org.uk (accessed 17 July 2017).

Jarvie, W. and T. Mercer. 2015. 'Championing change in a highly contested policy area: The literacy reforms of David Kemp, 1996–2001'. In J. Wanna, E. A. Lindquist and P. Marshall (eds), New Accountabilities, New Challenges. Canberra: ANU Press.

National Employment Services Association (NESA). 2015. Employment Services Australia: Roadmap for the future (detailed proposals). Melbourne: NESA. Available from: nesa.com.au (accessed 17 July 2017).

O'Connor, B. 2008. The Future of Employment Services in Australia. Canberra: Department of Education, Employment and Workplace Relations.

Organisation for Economic Co-operation and Development (OECD). 2001. Innovations in Labour Market Policies: The Australian way. Paris: OECD Publishing.

Organisation for Economic Co-operation and Development (OECD). 2012. Activating Jobseekers: How Australia does it. Paris: OECD Publishing.

Productivity Commission (PC). 2002. Independent review of the Job Network inquiry report. Report No. 21, June. Melbourne: Productivity Commission.

Struyven, L. 2004. Design choices in market competition for employment services for the long-term unemployed. OECD Social, Employment and Migration Working Papers No. 21. Paris: OECD Publishing.

Thomas, M. 2007. A review of developments in the Job Network. Research Paper No. 12, 2007–08, 24 December. Canberra: Department of Parliamentary Services, Parliament of Australia.

Appendix 13.1 The role of the Department of Employment

The Department of Employment (with different titles since 1998) administers the employment services market by:

  1. defining purchaser–provider arrangements and detailed in-service contracts with private and community-based providers
  2. organising public tenders and the award of contracts
  3. monitoring and supervising contract implementation.

In 2012, the department oversaw contracts with more than 100 private and community-sector providers. It paid providers fees for contracted services and placement outcomes, supervised contract implementation at the level of the department and through its state, territory and district offices, and monitored provider performance at the level of about 2,300 individual sites through star ratings assessments and other performance indicators (OECD 2012: 63, 75).

Appendix 13.2 Star ratings for the JSA, 2009–12

Appendix Table A13.1 Weightings used for JSA star ratings (per cent)

Overall stream weightings: Stream 1, 10 per cent; Stream 2, 20 per cent; Stream 3, 30 per cent; Stream 4, 40 per cent.

  KPI1 'Speed to place': Stream 1: 18; Stream 2: 7; Stream 3: 5; Stream 4: 2
  KPI2 Interim 'full' outcomes: Stream 1: 10; Stream 2: 23; Stream 3: 25; Stream 4: 19
  KPI2 Final 'full' outcomes: Stream 1: 10; Stream 2: 30; Stream 3: 30; Stream 4: 21
  KPI2 Intermediate outcomes: Stream 1: 10; Stream 2: 20; Stream 3: 20; Stream 4: 18
  KPI2 Paid placements: Stream 1: 42; Stream 2: 10; Stream 3: 10; Stream 4: 10
  KPI2 Completion of Stream 4: Stream 1: n.a.; Stream 2: n.a.; Stream 3: n.a.; Stream 4: 20
  KPI2 'Bonus' outcomes: Stream 1: 10; Stream 2: 10; Stream 3: 10; Stream 4: 10
  Total (each stream): 100

Notes: ‘Speed to place’ refers to the time taken to achieve outcomes; ‘interim and final

full outcomes’ refers to employment outcomes sufficient to remove entitlements to income

support or participation in an educational program that is sustained for 13 and 26 weeks,
respectively; ‘intermediate outcomes’ refers to part-time employment or a less substantial

educational program; ‘paid placements’ refers to employment that is sustained for at least

50 hours; ‘bonus outcomes’ refers to employment outcomes attained within 12 months of

completion of a qualifying training program or outcomes attained by Indigenous people.
Source: Davidson and Whitehead (2012: 80).
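To make the weighting scheme concrete, the sketch below combines the weights in Appendix Table A13.1 into a single site score. The KPI and stream weights come from the table; the per-KPI 'performance' inputs (ratios of actual to expected results) and the simple weighted-sum aggregation are assumptions for illustration, not the department's published method.

    # Weights are from Appendix Table A13.1; aggregation is an assumption.
    STREAM_WEIGHTS = {1: 0.10, 2: 0.20, 3: 0.30, 4: 0.40}

    KPI_WEIGHTS = {  # per cent within each stream; each column sums to 100
        "speed_to_place":        {1: 18, 2: 7,  3: 5,  4: 2},
        "interim_full_outcomes": {1: 10, 2: 23, 3: 25, 4: 19},
        "final_full_outcomes":   {1: 10, 2: 30, 3: 30, 4: 21},
        "intermediate_outcomes": {1: 10, 2: 20, 3: 20, 4: 18},
        "paid_placements":       {1: 42, 2: 10, 3: 10, 4: 10},
        "stream4_completion":    {1: 0,  2: 0,  3: 0,  4: 20},
        "bonus_outcomes":        {1: 10, 2: 10, 3: 10, 4: 10},
    }

    def site_score(performance: dict) -> float:
        """Weighted sum over streams and KPIs, so outcomes for stream 4
        jobseekers carry four times the weight of stream 1 outcomes."""
        return sum(
            STREAM_WEIGHTS[s] * (KPI_WEIGHTS[k][s] / 100) * performance[k][s]
            for s in STREAM_WEIGHTS
            for k in KPI_WEIGHTS
        )

    # A site performing exactly as expected on every KPI scores 1.0.
    flat = {k: {s: 1.0 for s in STREAM_WEIGHTS} for k in KPI_WEIGHTS}
    print(round(site_score(flat), 2))   # 1.0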

Changes in the star ratings framework compared with the Job Network framework include:
• greater complexity, with 36 weights (previously only seven weights)
• higher weighting on outcomes attained by the most disadvantaged
jobseekers: 40 per cent for those in stream four, compared with
10 per cent for those in stream one (previously, outcomes achieved
after one year of unemployment or three years had the same weight)
• higher weighting on 26-week outcomes compared with 13-week
outcomes (previously, 40 per cent on 13-week outcomes and
20 per cent on 26-week outcomes)
• 10 per cent weight on ‘bonus outcomes’, which include training/
apprenticeship outcomes (previously, there was a 10 per cent weight
on the disadvantaged jobseeker share in the 13-week outcomes)
• weight on ‘social outcomes’ for jobseekers who complete stream four
assistance (previously, ‘social outcomes’ were paid for completion
of two years in the personal support program).

This text is taken from Value for Money: Budget and financial management reform in the People's Republic of China, Taiwan and Australia, edited by Andrew Podger, Tsai-tsu Su, John Wanna, Hon S. Chan and Meili Niu, published 2018 by ANU Press, The Australian National University, Canberra, Australia. dx.doi.org/10.22459/VM.01.2018.13