Reliability index is an attempt to quantitatively assess the reliability of a system using a single numerical value.[1] The set of reliability indices varies depending on the field of engineering; multiple indices may be used to characterize a single system. In the simple case of an object that cannot be used or repaired once it fails, a useful index is the mean time to failure,[2] representing the expectation of the object's service lifetime. Another cross-disciplinary index is the forced outage rate (FOR), the probability that a particular type of device is out of order. Reliability indices are extensively used in modern electricity regulation.[3]
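As a minimal sketch of how two such indices reduce to simple averages and ratios (the failure times, outage hours, and the particular FOR formula used here are illustrative assumptions rather than figures from the sources above):

```python
# Illustrative sketch with hypothetical data: two simple reliability indices.

# Mean time to failure (MTTF) for a non-repairable item: the average of the
# observed times to failure of identical units (hours).
times_to_failure_h = [1200.0, 950.0, 1430.0, 1010.0]
mttf = sum(times_to_failure_h) / len(times_to_failure_h)

# Forced outage rate (FOR) for a repairable device: the fraction of time the
# device was out of order, here taken as forced-outage hours over the sum of
# service hours and forced-outage hours (an assumed, simplified formulation).
forced_outage_hours = 120.0
service_hours = 8400.0
forced_outage_rate = forced_outage_hours / (forced_outage_hours + service_hours)

print(f"MTTF = {mttf:.0f} h, FOR = {forced_outage_rate:.3f}")
```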
Power distribution networks
For power distribution networks there exists a "bewildering range of reliability indices" that quantify either the duration or the frequency of power interruptions; some try to combine both into a single number, a "nearly impossible task".[4] Popular indices are typically customer-oriented,[5] and some come in pairs, where "System" (S) in the name indicates an average across all customers and "Customer" (C) indicates an average across only the affected customers (those who experienced at least one interruption).[6] All indices are computed over a defined period, usually a year (an illustrative calculation follows the list):
Momentary Average Interruption Frequency Index (MAIFI) represents the average number of "momentary" interruptions (short ones, usually defined as lasting less than 1 minute or less than 5 minutes) per customer. If MAIFI is specified, momentary interruptions are usually excluded from SAIFI, so from the customer's point of view the total number of interruptions is SAIFI+MAIFI;[8]
Average Service Availability Index (ASAI) is the ratio of the total hours the customers were actually served to the number of hours during which they had requested service.
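A minimal Python sketch of how MAIFI and ASAI might be computed from a year's interruption records; the record layout, the 5-minute momentary threshold, and the assumption that every customer requests service for the entire period are hypothetical choices made for illustration:

```python
# Illustrative sketch (not a standard implementation): computing MAIFI and ASAI
# from hypothetical interruption records for a one-year reporting period.

MOMENTARY_LIMIT_MIN = 5          # assumed momentary threshold (1 or 5 minutes are common)
HOURS_IN_PERIOD = 8760           # one year
TOTAL_CUSTOMERS_SERVED = 10_000  # hypothetical system size

# Each record: (customers_affected, duration_minutes) for one interruption event.
interruptions = [
    (1200, 2),      # momentary event
    (300, 90),      # sustained event
    (4500, 1),      # momentary event
    (50, 600),      # sustained event
]

momentary = [(n, d) for (n, d) in interruptions if d < MOMENTARY_LIMIT_MIN]
sustained = [(n, d) for (n, d) in interruptions if d >= MOMENTARY_LIMIT_MIN]

# MAIFI: momentary customer interruptions averaged over all customers served.
maifi = sum(n for n, _ in momentary) / TOTAL_CUSTOMERS_SERVED

# ASAI: customer-hours actually served divided by customer-hours demanded,
# assuming every customer demands service for the whole period and that
# only sustained interruptions count against availability.
customer_hours_demanded = TOTAL_CUSTOMERS_SERVED * HOURS_IN_PERIOD
customer_hours_lost = sum(n * d / 60 for n, d in sustained)
asai = (customer_hours_demanded - customer_hours_lost) / customer_hours_demanded

print(f"MAIFI = {maifi:.3f} momentary interruptions per customer")
print(f"ASAI  = {asai:.6f}")
```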
History
Electric utilities came into existence in the late 19th century and since their inception have had to respond to problems in their distribution systems. Primitive means were used at first: the utility operator would get phone calls from customers who had lost power, put pins into a wall map at their locations, and try to guess the fault location from the clustering of the pins. The accounting for the outages was purely internal, and for years there was no attempt to standardize it (in the US, until the mid-1940s). In 1947, a joint study by the Edison Electric Institute and IEEE (at the time still AIEE) included a section on fault rates for overhead distribution lines; the results were summarized by Westinghouse Electric in 1959 in the detailed Electric Utility Engineering Reference Book: Distribution Systems.[3]
In the US, interest in reliability assessments of generation, transmission, substations, and distribution picked up after the Northeast blackout of 1965. A work by Capra et al.[9] in 1969 suggested designing systems to standardized levels of reliability and proposed a metric similar to the modern SAIFI.[3] SAIFI, SAIDI, CAIDI, ASIFI, and AIDI came into widespread use in the 1970s and were originally computed from data on paper outage tickets; computerized outage management systems (OMS) were used primarily to replace the "pushpin" method of tracking outages.

IEEE started an effort to standardize the indices through its Power Engineering Society. The working group, operating under different names (Working Group on Performance Records for Optimizing System Design, Working Group on Distribution Reliability, Distribution Reliability Working Group, standards IEEE P1366, IEEE P1782), produced reports that defined most of the modern indices in use.[10] Notably, SAIDI, SAIFI, CAIDI, CAIFI, ASAI, and ALII were defined in the Guide For Reliability Measurement and Data Collection (1971).[11][12]

In 1981 the electric utilities funded an effort at the Electric Power Research Institute to develop a computer program for predicting the reliability indices (EPRI itself had been created in response to the 1965 outage). In the mid-1980s, the electric utilities underwent workforce reductions; state regulatory bodies became concerned that reliability could suffer as a result and started to request annual reliability reports.[10] With personal computers becoming ubiquitous in the 1990s, OMS became cheaper and almost all utilities installed them.[13] By 1998, 64% of the utility companies were required by the state regulators to report reliability (although only 18% included momentary events in the calculations).[14]
Generation systems
For electricity generation systems, the indices typically reflect the balance between the system's ability to generate electricity ("capacity") and its consumption ("demand") and are sometimes referred to as adequacy indices,[15][16] as NERC distinguishes the adequacy (will there be enough capacity?) and security (will it work when disturbed?) aspects of reliability.[17] It is assumed that if the cases of demand exceeding the generation capacity are sufficiently rare and short, the distribution network will be able to avoid a power outage either by obtaining energy via an external interconnection or by "shedding" part of the electrical load.[citation needed] It is further assumed that the distribution system is ideal and capable of distributing the load in any generation configuration.[18] The reliability indices for electricity generation are mostly statistics-based (probabilistic), but some reflect rule-of-thumb spare capacity margins and are called deterministic. The deterministic indices include (a numerical illustration follows the list):
the installed reserve margin (RM, the percentage of generating capacity exceeding the maximum anticipated load) was traditionally used by the utilities, with values in the US reaching 20%-25% until the economic pressures of the 1970s;[19]
the largest unit (LU) index is based on the idea that the spare capacity needs to be related to the capacity of the largest generator in the system,[20] which can be taken out by a single fault;
for systems in which hydropower plays a significant role, the margin should also account for power shortages in a "dry year" (a predefined condition of low water supply, usually a year or a sequence of years).[20]
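A rough numerical illustration of the first two deterministic margins, using made-up unit sizes and a made-up peak load:

```python
# Illustrative sketch with hypothetical numbers: deterministic margin checks.

unit_capacities_mw = [300, 300, 250, 250, 200, 200, 150, 150]  # hypothetical generating units
peak_load_mw = 1500                                            # hypothetical maximum anticipated load

installed_capacity = sum(unit_capacities_mw)

# Installed reserve margin (RM): percentage of capacity exceeding the peak load.
reserve_margin_pct = 100 * (installed_capacity - peak_load_mw) / peak_load_mw

# Largest-unit (LU) criterion: spare capacity should at least cover the loss
# of the single largest generator.
largest_unit = max(unit_capacities_mw)
covers_largest_unit = (installed_capacity - peak_load_mw) >= largest_unit

print(f"Reserve margin: {reserve_margin_pct:.1f}%")
print(f"Spare capacity covers loss of largest unit ({largest_unit} MW): {covers_largest_unit}")
```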
The probabilistic indices include (an illustrative calculation follows the list):
loss of load probability (LOLP) reflects the probability of the demand exceeding the capacity in a given interval of time (for example, a year) before any emergency measures are taken. It is defined as the percentage of time during which the load on the system exceeds its capacity;
loss of load expectation (LOLE) is the total duration of the expected loss of load events in days; LOLH is its equivalent in hours;[22]
expected unserved energy (EUE) is the amount of additional energy that would be required to fully satisfy the demand within some period (usually a year). It is also known as "expected energy not served" (or not supplied, EENS)[23] and as the loss of energy expectation, LOEE;[24]
loss of load events (LOLEV) is the number of situations in which the demand exceeded the capacity.
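The probabilistic indices can be estimated from chronological series of available capacity and demand; the sketch below uses short, made-up hourly series purely to show how LOLP, LOLH, EUE, and LOLEV relate to one another, not how a production adequacy study would be carried out:

```python
# Illustrative sketch with synthetic data: estimating LOLP, LOLH, EUE and LOLEV
# from hourly series of available capacity and demand over one period.

# Hypothetical hourly available capacity and load (MW); a real study would use
# a full year (8760 hours) and probabilistic outage/forecast models.
capacity = [100, 100, 95, 90, 100, 100, 100, 85, 100, 100]
load     = [ 80,  90, 98, 95,  70,  60, 102, 80,  99,  75]

shortfalls = [max(l - c, 0) for c, l in zip(capacity, load)]

lolh = sum(1 for s in shortfalls if s > 0)   # loss-of-load hours (LOLE expressed in hours)
lolp = lolh / len(load)                      # fraction of time load exceeds capacity
eue  = sum(shortfalls)                       # unserved energy over the period, MWh

# LOLEV: number of distinct loss-of-load events (consecutive shortfall hours
# counted as a single event).
lolev = sum(1 for i, s in enumerate(shortfalls)
            if s > 0 and (i == 0 or shortfalls[i - 1] == 0))

print(f"LOLP = {lolp:.2f}, LOLH = {lolh} h, EUE = {eue} MWh, LOLEV = {lolev}")
```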
Ibanez and Milligan postulate that, in practice, the reliability metrics for generation are linearly related; in particular, the capacity credit values calculated based on any of these metrics were found to be "rather close".[25]
^"Guide For Reliability Measurement and Data Collection," Report of the Reliability Task Force to the Transmission and Distribution Committee of the Edison Electric Institute, October 1971.
Billinton, Roy; Li, Wenyuan (30 November 1994). "Adequacy Indices". Reliability Assessment of Electric Power Systems Using Monte Carlo Methods. Springer Science & Business Media. pp. 22–29. ISBN978-0-306-44781-5. OCLC1012458483.