Decide Now or Wait for the Next Forecast? Testing a Decision Framework Using Real Forecasts and Observations

Users of meteorological forecasts are often faced with the question of whether to make a decision now, on the basis of the current forecast, or to wait for the next and, it is hoped, more accurate forecast before making the decision. Following previous authors, we analyze this question as an extension of the well-known cost-loss model. Within this extended cost-loss model, the question of whether to decide now or to wait depends on two specific aspects of the forecast, both of which involve probabilities of probabilities. For the special case of weather and climate forecasts in the form of normal distributions, we derive a simple simulation algorithm, and equivalent analytical expressions, for calculating these two probabilities. We apply the algorithm to forecasts of temperature and find that the algorithm leads to better decisions in most cases relative to three simpler alternative decision-making schemes, in both a simulated context and when we use reforecasts, surface observations, and rigorous out-of-sample validation of the decisions. To the best of our knowledge, this is the first time that a dynamic multistage decision algorithm has been demonstrated to work using real weather observations. Our results have implications for the additional kinds of information that forecasters of weather and climate could produce to facilitate good decision-making on the basis of their forecasts.


Introduction
The use of forecast probabilities to make decisions has been studied extensively in the atmospheric sciences. Many studies have investigated decision-making using the single-stage cost-loss model (e.g., Murphy 1969; Kernan 1975; Katz and Murphy 1997; Buizza 2001; Richardson 2001; Roulin 2007). There have also been a number of generalizations of the single-stage cost-loss model (Murphy 1985; Murphy et al. 1985; Epstein and Murphy 1987; Murphy and Ye 1990; Wilks 1991; Wilks et al. 1993; Katz 1993; Wilks and Wolfe 1998; Regnier and Harr 2006; Roulin 2007; McLay 2011; Tena and Gomez 2011; Matte et al. 2017). A review is given in Wilks (2014). The single-stage cost-loss model is used as an idealized model for making types of decisions that are analogous in logical structure to the binary decision of whether to cancel an event based on a forecast. In this simple model there are just two possible weather outcomes (which we will refer to as good or bad weather), with predicted probabilities, and a single binary decision that needs to be taken based on those probabilities (which we will refer to as cancel or go ahead). The four combinations resulting from the two weather outcomes and the two possible choices lead to different levels of benefit or harm, measured in the model using the concept of utility. Choosing the decision that maximizes the expected utility leads to the conclusion that the event organizer should cancel if the predicted probability of bad weather is above a certain threshold, where the threshold depends in a simple way on the parameters that define the utilities of the different outcomes.
However, knowing probabilities of future weather and climate outcomes may not always be enough information to make rational decisions, and additional temporal aspects of forecasts and decision-making may come into play. This has been demonstrated in a number of studies that consider generalizations of the single-stage cost-loss model to multistage decisions (in which a series of interrelated decisions are made over time) such as those described in Murphy et al. (1985), Epstein and Murphy (1987), Murphy and Ye (1990), Wilks (1991), Wilks et al. (1993), Wilks and Wolfe (1998), Regnier and Harr (2006), Roulin (2007), and McLay (2011). Multistage decisions in which the decisions interact over time are known as dynamic decisions and the analysis can be embedded in the general framework of dynamic programming, which was introduced by Richard Bellman in the 1950s (Bellman 1957). Dynamic programming is used in many fields other than weather and climate; for example, economists have used it to study whether investors should buy a stock today or wait to see how the stock price evolves as new information becomes available. Dynamic programming is often considered as a part of the larger field of operations research (Ventura 2019).
These studies lay out a body of theory for multistage dynamic decision-making, for various different decision situations, and test the theory in various ways. However, perhaps with the exception of the weather roulette game described in Hagedorn and Smith (2009) and Terrado et al. (2019), there have apparently been no studies on decision-making that have attempted to validate decisions with real weather observations. As a result, it is not possible to assess the extent to which the decision-making algorithms that these studies describe would work better than alternative simpler decision algorithms in a real situation, given the extra uncertainties that real forecasts and real observations introduce.
In this article, we reconsider the two-stage dynamic decision-making problem discussed by Murphy and Ye (1990) and Regnier and Harr (2006): should we decide now, or should we wait for the next forecast? The technical approach we take to answering this question is closely related to the methods described in the studies cited above, with some small differences. For instance, our model follows Murphy and Ye (1990), Regnier and Harr (2006), and McLay (2011) and differs from those of Murphy et al. (1985) and Epstein and Murphy (1987) and the example in Wilks (1991) in that it allows the costs in the cost-loss model to vary in time. Variations in costs are often an essential factor in deciding whether to wait for the next forecast. Our model differs from Regnier and Harr (2006) and McLay (2011) in that it involves temperature. The most important novel aspects of our analysis are that (i) we consider normally distributed forecasts, and for this special case we are able to derive an algorithm for making decisions that is much simpler than the Markov chain modeling of transition probabilities described in Wilks (1991), Regnier and Harr (2006), and McLay (2011). The algorithm requires nothing more than a knowledge of the root-mean-squared error performance of the forecast system, and (ii) we apply our decision algorithm to real forecast data and validate the decisions it generates against real weather observations, in a rigorous out-of-sample way. By doing so, we address the crucial question of whether the algorithm for decision-making that we describe could really be beneficial if applied in practice.
We now illustrate some of the aspects of the decide now or wait for the next forecast decision, using a simple example. An event is planned for Saturday. If the weather conditions at the start of the event are unsuitable then the event will have to be canceled, leading to various expenses, known as the "loss" in the cost-loss framework. Daily weather forecasts are available in the run-up to the event and are used by the event organizer to decide whether to cancel in advance. Canceling on Thursday leads to only small cancellation charges, while canceling on Friday leads to larger charges. Both sets of cancellation charges are lower than the potential loss due to last-minute cancellation on Saturday, and this leads to a nuanced set of decisions around whether to cancel on Thursday, Friday, or not at all. On Thursday, the organizer needs to decide whether to cancel (and take advantage of the lower cancellation charges) or wait for Friday's presumably more skillful forecast. If they wait, then on Friday they need to decide whether to cancel (and suffer higher cancellation charges) or go ahead and take the risk of the loss if the weather is bad.
In this example it is clear that rational decision-making requires not only an estimate of the probabilities of future outcomes, but also an understanding of how those estimates, and their skill, might change with subsequent forecasts. Similar examples can be constructed that relate to seasonal forecasts (e.g., a farmer having to decide whether to plant now or later) and climate forecasts (e.g., a government having to decide whether to build a flood defense now or later). Our example is idealized, and one could imagine factors that complicate the real-world decision-making situation. For instance, in reality, there may be forecasts available at greater frequency than daily, which would allow the further option of cancellation late on Friday or early on Saturday, or the organizer may have the option to take out weather insurance to mitigate the loss if it occurs. In fact, for real-world decisions, it is seldom possible to write down every factor that influences the decision, let alone code them all into a mathematical framework, and practically all actual decisions are ultimately made using a subjective evaluation based on multiple inputs. As a result, this example should not be taken too literally. It nevertheless illustrates that the decide now or wait for the next forecast dilemma is an essential part of many decision-making situations related to weather and climate.
In section 2 we describe the single-stage cost-loss model in more detail, in the context of the example given above (that of an event organized for Saturday), within which the decision to make is whether to cancel the event. We then extend the model, following Murphy and Ye (1990) and Regnier and Harr (2006), and others, so that it can be used to address the decide now or wait question. General decision models, applying to many steps of forecast, with many possible actions and many possible weather outcomes, can become extremely complex, and the complexity may obscure the intuition behind the model and the solutions that it produces, potentially resulting in little useful insight and models that may never be used in practice. We will try to avoid this by making the minimum possible set of changes to the cost-loss model that allow us to answer the decide now or wait question, leading to a model that is simple and transparent enough that it can be readily understood. We will derive the model from basic considerations, so that it can be understood without first having to study the dynamic programming framework, and so that the derivation is intuitive for atmospheric scientists. This extended cost-loss model is then used to explore how to make a rational decision as to whether to decide based on the first forecast or wait for the second forecast, and to determine exactly what information is required for that decision.
In section 3, we consider the case in which the forecasts consist of normal distributions and are well calibrated, which allows certain simplifications in the modeling and leads to a straightforward implementation algorithm with which we can make the decide-now-or-wait decision. In section 4 we test the implementation algorithm from section 3 using a long series of synthetic weather forecast data. The synthetic data are created in such a way as to capture the relevant statistical structure of real temperature forecasts. In section 5 we perform similar tests on real forecast data and real observations using a rigorous out-of-sample method for testing the decisions. This is a much more challenging test, since the real forecasts and real observations undoubtedly do not perfectly fit the statistical assumptions on which the model is based. In section 6 we summarize the results and discuss the implications for weather and climate forecasting.

Cost-loss modeling
a. The single-stage cost-loss model

We now explain the single-stage cost-loss model so that we can introduce the concepts needed to extend it in section 2b below. The single-stage cost-loss model as used in the atmospheric sciences assumes that a probabilistic forecast is available that gives the probability of two possible weather outcomes: p for bad weather and 1 − p for good weather. The forecast probabilities are assumed to be well calibrated (i.e., we assume they have been adjusted based on what can be learnt from past performance of the forecast system) and so can be taken as the best estimate probabilities we have, and do not require further adjustment.
To analyze the model, one can consider the different possible outcomes as a function of the choices that could be made by the event organizer. Each outcome has a probability, based on the forecast, and a utility, based on the definition of the problem. The probabilities and the utilities can be combined to calculate the expected utility for each of the organizer's possible choices, and the assumption in the model is that the organizer will opt for the choice with the higher expected utility. The utilities for each outcome are given in Table 1 and discussed below.
To apply the expected utility framework, first, we consider the choice in which the organizer goes ahead with the event. In this case there are two possible outcomes, depending on the weather, which are given different utilities in the model: good weather (probability 1 − p) leads to no cost and no loss, and so is given a utility of zero, while bad weather (probability p) leads to a loss, and so is given a utility of −L, where L is positive. The expected utility of going ahead with the event (E_go_ahead) is the sum of each probability multiplied by the corresponding utility, giving E_go_ahead = (1 − p)(0) + (p)(−L) = −pL.
Now we consider the choice in which the organizer cancels the event. In this case there are again two possible weather outcomes, but this time both are given the same utility of −C, the cost of cancellation, where C is positive. The expected utility for cancellation is therefore E_cancel = −C. If the organizer seeks to maximize their expected utility, then the decision to cancel would be taken if the expected utility of canceling is greater than the expected utility of going ahead, E_cancel > E_go_ahead, which gives −C > −pL. Rearranging this expression leads to p > C/L. The conclusion is that, for the organizer to maximize their expected utility, they should cancel if the probability of bad weather is greater than a critical probability given by p_crit = C/L. This expression is the reason the model is referred to as the cost-loss model.
If C is greater than L, then p_crit is greater than 1, and the event will never be canceled, because cancellation always has a lower utility than bad weather on the day. The interesting cases arise when C < L and there is a trade-off between canceling and incurring the cost of cancellation, on the one hand, and not canceling and incurring the risk of bad weather and associated loss, on the other.
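The single-stage rule can be written as a few lines of code. The following is a minimal sketch (the function name and example numbers are ours, not from the original presentation):

```python
def single_stage_decision(p, C, L):
    """Single-stage cost-loss decision.

    p: well-calibrated forecast probability of bad weather
    C: cost of cancelling in advance (C > 0)
    L: loss if the event goes ahead in bad weather (L > 0)
    Returns "cancel" or "go_ahead", maximizing expected utility.
    """
    e_go_ahead = -p * L   # (1 - p)*0 + p*(-L)
    e_cancel = -C         # cancellation cost is paid whatever the weather
    return "cancel" if e_cancel > e_go_ahead else "go_ahead"

# Cancel exactly when p > C/L = p_crit; here p_crit = 100/500 = 0.2.
print(single_stage_decision(p=0.4, C=100.0, L=500.0))  # -> cancel
print(single_stage_decision(p=0.1, C=100.0, L=500.0))  # -> go_ahead
```

Note that only the ratio C/L matters for the decision, not the absolute sizes of the cost and the loss.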
Models similar to the atmospheric sciences cost-loss model have been studied in other fields. For instance, economists have studied the question of whether to buy a stock today, given estimates of the probability of the stock price increasing or decreasing to certain levels tomorrow, and extensions of that question to multiple stages. See, for example, the discussion starting on p. 80 of Chambers and Lambert (2015), or the discussion starting on p. 95 of the textbook by Dixit and Pindyck (1994).

b. Extending the cost-loss model
We can extend the single-stage weather and climate cost-loss model, while staying within the expected utility framework, as follows. To make the explanation as readily understood as possible we will continue to use our illustrative example based on an event organized for Saturday. We now assume that two (lagged) weather forecasts are available for Saturday, one on Thursday and one on Friday. The utilities for each outcome are given in Table 2 and discussed below. The decision framework we now derive applies equally well to other types of forecast and other time periods, such as weather forecasts from Monday and Friday for Saturday, or climate forecasts for 2050 produced in 2030 and 2040.
On Friday, the organizer faces the same decision as is described in the single-stage cost-loss model: whether to cancel Saturday's event. We will now write the utility of cancellation on Friday as −C_1, where the subscript 1 indicates cancellation 1 day in advance of the event or, more generally, 1 forecast step in advance. The critical probability then becomes p_crit = C_1/L, and the organizer should cancel the event on Friday if the probability of bad weather exceeds p_crit, as before. We now, additionally, move backward one step in time and imagine the organizer considering a weather forecast on Thursday, at which point they have the choice to either cancel there and then, or wait for Friday's forecast. This is the decision that we will now analyze in detail. Canceling on Thursday leads to a cancellation utility of −C_2, and the interesting cases arise in this problem when cancellation on Thursday is cheaper than cancellation on Friday, which is in turn cheaper than last-minute cancellation on Saturday (C_2 < C_1 < L). Cancellation on Thursday being cheaper than cancellation on Friday (C_2 < C_1) leads to a dilemma for the organizer, particularly when the weather forecast on Thursday (for Saturday) is looking bad. There is now a trade-off for them between either canceling on Thursday and benefiting from Thursday's cheaper cancellation fee or waiting for Friday to make a more informed decision.
From a mathematical point of view, the decision on Thursday is complex because it may be followed by, and needs to take account of the possibility of, having to make another decision on Friday, and that second decision would be based on information (Friday's forecast for Saturday) that is not available on Thursday. To help make Thursday's decision we must analyze how much we do already know on Thursday about what Friday's forecast might be.
To analyze the trade-off involved in Thursday's decision using expected utility, we first define four probabilities: p_1, p_2, p′, and p̄. In the single-stage cost-loss model described above, p is used to represent the forecast probability of bad weather on Saturday, as evaluated on Friday. In the extended cost-loss model, we will now write the same probability as p_1 to indicate a 1-day forecast. We will also define the forecast probability of bad weather on Saturday, as evaluated on Thursday, as p_2.
From the point of view of Thursday, p_1 is now not a single probability but a random variable, with a range of possible probability values that are all the possible values that Friday's forecast might take, given what we know on Thursday. For instance, if the forecast for Saturday, created on Thursday, is already saying that bad weather is very likely, then p_2 will be known, and high, and we would already be able to predict that p_1 will most likely have high values, even though we would not know exactly the value it would take until Friday.
In this sense one could imagine creating a probabilistic forecast on Thursday for the range of values that p_1 might have on Friday, and indeed the simulation algorithm described in section 3 below, and applied in sections 4 and 5, involves making just such a probabilistic forecast of future forecasts. This probabilistic forecast captures how we think the probability of bad weather on Saturday will change from what we are predicting on Thursday to what we might predict on Friday. From this probabilistic forecast for p_1, we will then evaluate the probability that p_1 will exceed the critical value p_crit, and we will call this new probability p′. Since exceeding the critical value leads to cancellation of the event on Friday, p′ is the probability that we would cancel the event on Friday, as assessed on Thursday.
In the single-stage cost-loss model, if the event organizer chooses to go ahead, because p < p_crit, then there is still the chance that the weather will turn out bad during the event. This happens with probability p in that model. In the extended cost-loss model, we will again need to consider the chance that the organizer goes ahead but the weather turns out bad during the event, but we now need to evaluate it on Thursday so that it can form part of the basis for the decision to be made on Thursday. We will call this probability p̄. From Thursday's point of view, going ahead, yet having bad weather, can arise from a range of values of p_1. For instance, we can imagine one case (on Friday) in which p_1 turns out only just below the threshold p_crit. In this case, the organizer would go ahead, but bad weather on Saturday is not that unlikely, since p_1 is still fairly high. On the other hand, we can imagine another case in which p_1 turns out far below the threshold p_crit, in which case bad weather on Saturday is more unlikely. p̄ is the mean of the probability of bad weather, conditional on going ahead, over all such cases, i.e., for different levels of p_1 in the range [0, p_crit). In summary, p̄ is the probability, evaluated on Thursday, that if on Friday p_1 does not exceed p_crit, the weather on Saturday will nevertheless be bad. Table 3 summarizes the definitions of p_1, p_2, p′, and p̄ for reference. The meanings of p′ and p̄ will become clearer in the context of the normal distribution example, discussed in section 3 below.

c. Expected utility analysis
Given the definitions of p_1, p_2, p′, and p̄, we can now derive an expression for the expected utility of the two possible choices in the extended cost-loss model. The decision to be analyzed in this case is the decision taken on Thursday to cancel or wait for Friday's forecast. The logic of the derivation is illustrated in Fig. 1.
First, we consider the choice of canceling on Thursday. This leads to a 100% chance of a utility of −C_2, and hence an expected utility of cancellation on Thursday of E_cancel_Thursday = −C_2.

TABLE 3. Definitions of the four probabilities.
p_1: The probability, evaluated on Friday, of bad weather on Saturday; when considered from the point of view of Friday, p_1 takes a single value; when considered from the point of view of Thursday, p_1 has a distribution of possible values.
p_2: The probability, evaluated on Thursday, of bad weather on Saturday.
p′: The probability, evaluated on Thursday, that on Friday p_1 will exceed p_crit.
p̄: The probability, evaluated on Thursday, that, if on Friday p_1 does not exceed p_crit, the weather on Saturday will nevertheless be bad.

Second, we consider the choice of waiting for Friday's forecast. Having waited until Friday, there are two outcomes: cancel on Friday, or decide to go ahead. These occur with different probabilities, which must be evaluated from the point of view of Thursday in order to feed into Thursday's decision. The first of these outcomes, canceling on Friday, occurs if p_1 > p_crit, and incurs a utility of −C_1. From Thursday's point of view the probability of p_1 > p_crit occurring is p′ (by the definition of p′ given above), and so the contribution of canceling on Friday to the expected utility for waiting on Thursday is −p′C_1.
The second of these outcomes on Friday, deciding to go ahead, is more complicated, since the utility is then affected by the weather outcome. Deciding to go ahead on Friday will only occur if p_1 < p_crit, which occurs with probability 1 − p′. If the weather is good, the utility outcome is then zero, and the contribution to the expected utility is zero. If the weather is bad, which occurs with conditional probability p̄ (by the definition of p̄ given above), then the utility outcome is −L. The contribution to the expected utility of waiting on Thursday from going ahead on Friday is therefore −(1 − p′)p̄L. The probabilities in this expression can also be understood using the definition of conditional probability, which states that P(a AND b) = P(a)P(b|a) and which we can apply here to say that the probability of going ahead AND having bad weather is equal to the probability of going ahead (1 − p′) multiplied by the probability of having bad weather, given that we have gone ahead (p̄).
From the above considerations, the overall expected utility of waiting on Thursday is made up of three contributions, from the three possible outcomes that waiting on Thursday may lead to. These are: canceling on Friday (−p′C_1), going ahead and having good weather (0), and going ahead and having bad weather [−(1 − p′)p̄L]. Combining the three contributions to the expected utility of waiting (and noting that one of them is zero) gives a total expected utility of waiting of E_waiting = −p′C_1 − (1 − p′)p̄L.
We have derived expressions for the expected utility of both of the choices that present themselves on Thursday. We can now proceed to the final step in the analysis, which is to compare the expected utilities of the two choices. If the organizer seeks to maximize their expected utility, then the decision on Thursday to cancel would be taken if the expected utility of canceling is greater than the expected utility of waiting, E_cancel_Thursday > E_waiting, implying

−C_2 > −p′C_1 − (1 − p′)p̄L. (1)

If we fully understand our forecasting system, the forecast skill, and how forecasts can change in time, then we can calculate p′ and p̄, since they are just properties of the forecast. This inequality then determines whether to cancel, as a function of L, C_1, and the new parameter C_2. If we decide to wait, then come Friday the complexity of the decision on Thursday can be forgotten, and the decision on Friday can be made using the single-stage cost-loss model.

FIG. 1. The derivation of the expected utilities for the decision as to whether to cancel or wait on Thursday. "Prob. | cancel" means "probability given the decision to cancel," and "Prob. | wait" means "probability given the decision to wait." "Cont. to exp. utility" means "contribution to expected utility." The probabilities in the "Saturday's weather" column are conditional probabilities, evaluated on Thursday, conditional on Friday's forecast turning out well enough that the event is not canceled on Friday. The decision to cancel leads to only one outcome in the final column, which therefore has a probability of 1. The decision to wait leads to three possible outcomes in the final column, with different probabilities, different utilities, and different contributions to the expected utility. The expected utility for the decision to wait is the sum of these three contributions.

In summary, we have derived an expression that solves the extended cost-loss problem of whether to cancel on Thursday or wait for another forecast on Friday. It depends on the calculation of two forecast quantities that are extensions of what is normally included in a probabilistic forecast. The first is p′, the probability (evaluated on Thursday) that the probability (evaluated on Friday) of bad weather (on Saturday) exceeds a critical threshold. The second is p̄, the conditional probability (evaluated on Thursday) of bad weather (on Saturday), given that the probability (on Friday) of bad weather (on Saturday) does not exceed the critical threshold. The probabilities p′ and p̄ can be considered as properties of a probabilistic forecast and forecast system. They are both functions of two dimensions: the threshold level (of, e.g., rainfall, temperature, or wind) that defines bad weather, and the threshold probability from the single-stage cost-loss problem applied to Friday's decision. In principle one could imagine routinely calculating numerical approximations to these two-dimensional functions every time a forecast is created. Values could then be read off to solve specific extended cost-loss problems as they arise.
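Once p′ and p̄ are available, Thursday's cancel-or-wait rule reduces to a comparison of two expected utilities. The following is a minimal sketch (our own illustration; the function name and example numbers are assumptions, and p′ and p̄ are taken as given):

```python
def two_stage_decision(p_prime, p_bar, C2, C1, L):
    """Thursday's cancel-or-wait decision in the extended cost-loss model.

    p_prime: prob. (evaluated Thursday) that Friday's forecast p_1 exceeds p_crit
    p_bar:   prob. (evaluated Thursday) of bad weather, given p_1 stays below p_crit
    C2, C1:  cancellation costs on Thursday and Friday, with C2 < C1 < L
    L:       loss from last-minute cancellation on Saturday
    Returns "cancel" or "wait", maximizing expected utility.
    """
    e_cancel_thursday = -C2
    e_waiting = -p_prime * C1 - (1.0 - p_prime) * p_bar * L
    return "cancel" if e_cancel_thursday > e_waiting else "wait"

# High chance of a Friday cancellation: cheaper to cancel now.
print(two_stage_decision(p_prime=0.5, p_bar=0.05, C2=50.0, C1=100.0, L=500.0))  # -> cancel
# Lower chance of a Friday cancellation: worth waiting for the better forecast.
print(two_stage_decision(p_prime=0.2, p_bar=0.05, C2=50.0, C1=100.0, L=500.0))  # -> wait
```

In the first call the expected utility of waiting is −0.5(100) − 0.5(0.05)(500) = −62.5, which is worse than the −50 of canceling immediately; in the second it is −40, which is better.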
In the next section we will consider the special case of forecasts consisting of normal distributions, in which the calculation of p′ and p̄ becomes straightforward.
In the analysis above we considered the outcomes that follow Friday's forecast, and then moved back one step in time and considered them from the point of view of Thursday. This stepping backward in time is the essence of the dynamic programming approach to solving multistage decision problems (Bellman 1957) and can be extended to solve problems that involve many time steps, many forecast outcomes, and many possible decisions, as described in detail in Murphy et al. (1985) and Wilks (1991) and applied in, for example, McLay (2008).

The normal distribution case
To apply the extended cost-loss model derived above in the simplest possible context, we will now consider a forecast system that produces forecasts consisting of normal distributions that are made on Thursday and Friday for Saturday. We will first discuss the statistical properties of these forecasts in some detail before presenting methods that can be used for applying the decision-making framework. Our method is similar to the Markov chain approaches described in, e.g., Regnier and Harr (2006) and McLay (2008), but applies the Markov chain to the mean rather than to the probabilities, following Jewson and Ziehmann (2004), which allows for considerable simplification.

a. Forecast properties
Since they consist of normal distributions, both Thursday's and Friday's forecasts can be described using a mean and a standard deviation: the mean represents the best single forecast, and the standard deviation represents the uncertainty around that forecast. For the forecast made on Thursday we write the mean and standard deviation as m_2 and s_2, and for the forecast made on Friday we write the mean and standard deviation as m_1 and s_1, where s_1 < s_2 since we assume Friday's forecast is more accurate on average. We write the observation for Saturday as a (for "actual") and define the forecast errors as e_2 = m_2 − a and e_1 = m_1 − a.
We will assume that the forecasts are well calibrated, by which we mean that they cannot easily be improved by further statistical postprocessing based on past forecasts and past forecast errors. This leads us to make three calibration assumptions about the statistical properties of the forecasts. The first calibration assumption is that the means of the forecasts are unbiased, so that E(a|m_1) = m_1 and E(a|m_2) = m_2. Taking expectations of both these expressions and using the law of iterated expectations gives E(a) = E(m_1) and E(a) = E(m_2), from which we can deduce that E(m_1 − m_2) = 0 and E(e_1 − e_2) = 0. The second calibration assumption is that the standard deviations of the forecasts match the standard deviations of the actual forecast errors.
The third calibration assumption is slightly more complex. To introduce it, we first define the change in the mean forecast from Thursday to Friday as d = m_1 − m_2. Using the assumptions given above, E(d) = E(m_1 − m_2) = 0. We also note that d = m_1 − m_2 = (m_1 − a) − (m_2 − a) = e_1 − e_2.
At the point in time that the decision is being made on Thursday, Thursday's forecast, and hence m_2 and s_2, are known. The details of how m_2 and s_2 are created are not relevant, as long as they satisfy the assumptions given above. For instance, s_2 could have been estimated simply from analysis of past forecast errors, or could have been derived from a statistical calibration scheme that merges information from past forecast errors with information from the ensemble spread (Jewson et al. 2004; Gneiting et al. 2005).
Friday's forecast, however, will not be known on Thursday. We do nevertheless need to be able to estimate s_1 already on Thursday in order to estimate the variance of d, V(d), since V(d) is required for the algorithm described below. The simplest method for estimating s_1 on Thursday would be to use past forecast errors. Alternatively, one could investigate whether there might be information in Thursday's ensemble spread to help predict s_1 (i.e., to predict the uncertainty around the next forecast, given the current ensemble spread), although this has never, to our knowledge, been explored. Another approach would be to estimate V(d) directly from the ensemble spread: this approach has been considered in Jewson and Ziehmann (2004).
To derive an expression for V(d) we will assume, as the third calibration assumption, that the forecast error e1 is independent of the change in the forecast d. The justification for this assumption is that if this were not the case then, on Friday, having observed the change from m2 to m1 (and hence the value of d), one would have information about e1 that would allow one to improve the forecast m1. We are assuming that any such improvements have already been made as part of the forecast calibration process, and hence that there is no longer any information about e1 contained in d, and that d and e1 are independent. Writing e2 = e1 − d, we can take variances of both sides. Since e1 and d are independent, by the argument above, there is no covariance term on the right-hand side, giving V(e2) = V(e1) + V(d), and hence

V(d) = V(e2) − V(e1) = s2² − s1².

In this way, we are now able to calculate the variance of the change d from the variances of the forecast errors. This variance is used in the algorithm described below. A final assumption we make, in addition to the calibration assumptions given above, is that the forecast errors e1 and e2, and the change in the forecast d, are all normally distributed.
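The variance identity above can be checked numerically. The sketch below (our code, with illustrative values; not from the paper) simulates independent e1 and d, forms e2 = e1 − d, and confirms that V(e2) − V(e1) recovers V(d):

```python
import random
import statistics

random.seed(1)
s1, d_sd = 2.767, 2.167          # illustrative standard deviations of e1 and of d
N = 200_000
e1 = [random.gauss(0.0, s1) for _ in range(N)]
d = [random.gauss(0.0, d_sd) for _ in range(N)]
e2 = [x - y for x, y in zip(e1, d)]   # e2 = e1 - d

# With e1 and d independent, V(e2) = V(e1) + V(d), so this recovers V(d):
v_d_implied = statistics.pvariance(e2) - statistics.pvariance(e1)
```

Here `v_d_implied` comes out close to `d_sd**2`, up to sampling noise.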
The assumption that e1 and d are independent is analogous to the efficient market hypothesis in economics, which says that stock price changes are mostly unpredictable. More details are given in Wilmott et al. (1995), Baxter and Rennie (1996), and Hull (2017).

b. Solutions for p0 and p̂
We now describe how we can calculate p0 and p̂, given a forecast with the properties described above; these quantities can then be used to make the cancel-or-wait decision. We present two methods: the first is based on simulation, and the second on numerical integration. The two methods have pros and cons. The simulation method is intuitively simple and forms a framework that could readily be extended to encompass different distributions and more complex decision problems, such as including distributions to capture the uncertainty in the parameters s1 and s2. The numerical integration method, on the other hand, is faster and more accurate, but would be more difficult to extend. For the problems we study below in sections 4 and 5 we have found that the simulation method is both fast enough and accurate enough, given standard computing resources. However, in the extension to multiple time steps, simulation methods might be prohibitively slow and numerical integration might be preferable.
To define good and bad weather we will assume there is a given threshold value u of the forecast variable that separates bad weather from good weather, with values higher than u giving bad weather. An example would be temperature, where values above a given high threshold (i.e., a heatwave) may lead to the cancellation of an event.

1) SIMULATION SOLUTION
Given the forecasts defined above, we now describe a simulation algorithm that can be run on Thursday for calculating p0 and p̂ for this forecast system. The simulation algorithm estimates p0 and p̂ in a conceptually straightforward way by simulating many possible versions of Friday's forecast for Saturday, given the information available on Thursday, and calculating p0 and p̂ from these many simulated forecasts.
We start by considering the forecast mean on Thursday, m2, and in the first part of the simulation method we model how the forecast mean might change from Thursday to Friday, i.e., how m2 changes to m1. Since we know that E(m1 − m2) = 0, we have E(m1) = E(m2) = m2 (the latter step because on Thursday m2 is no longer random but is fixed by Thursday's forecast), and we see that the distribution of possible values for m1 will be centered around m2. We also know V(d), the variance of the change in the forecast means, from Eq. (2) above, and we have assumed that d is normally distributed. As a result, we can model the distribution of values that m1 might take on Friday as a normal distribution with mean m2 and variance V(d), which we write as N[m2, V(d)]. Each possible value of m1 in this distribution corresponds to a possible probability forecast on Friday, consisting of a normal distribution centered around that value of m1 with standard deviation s1. We are thus modeling a distribution of possible distributions for Friday's forecast.
This leads to an algorithm that can be used on Thursday for the calculation of p0 and p̂, as follows:

1) Derive s1 and s2, either from analysis of past forecast errors, or from the ensemble spread, or both. We will assume single values for both s1 and s2, rather than distributions, although the method could be generalized to deal with parameter uncertainty by simulating over different values for s1 and s2. Continue only if s1 < s2.
2) Calculate V(d) = s2² − s1².
3) Simulate Q values of m1 from the distribution N[m2, V(d)], where Q should be chosen large enough for good convergence of the results of the algorithm.
4) For each of the Q simulated values of m1, use the corresponding forecast N(m1, s1²) to calculate a value of p1 (the probability of exceeding u), using the cumulative distribution function (CDF) of the normal distribution.
5) Count how many of the Q values of p1 exceed p_crit, to give R.
6) Estimate p0 as R/Q.
7) For each of the Q − R forecasts for which p1 does not exceed p_crit, calculate the probability that the forecast variable exceeds u, using the CDF of the normal distribution.
8) Estimate p̂ as the mean of these Q − R probabilities.
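As a concrete illustration, the steps above can be sketched in Python as follows (our code and variable names; the threshold u = 2.0 and p_crit = 0.3 are illustrative choices, while s1 and s2 are the values used later in the paper):

```python
import random
from statistics import NormalDist

def simulate_p0_phat(m2, s1, s2, u, p_crit, Q=100_000, seed=0):
    """Estimate p0 and p-hat on Thursday, given Thursday's forecast N(m2, s2^2)."""
    if not s1 < s2:
        raise ValueError("need s1 < s2")             # step 1
    v_d = s2**2 - s1**2                              # step 2: variance of the change d
    rng = random.Random(seed)
    std = NormalDist()
    n_cancel = 0
    phat_sum = 0.0
    for _ in range(Q):
        m1 = rng.gauss(m2, v_d**0.5)                 # step 3: a possible Friday mean
        p1 = 1.0 - std.cdf((u - m1) / s1)            # step 4: Friday's prob of bad weather
        if p1 > p_crit:
            n_cancel += 1                            # step 5: Friday would cancel
        else:
            phat_sum += p1                           # step 7: goes ahead; weather may still be bad
    p0 = n_cancel / Q                                # step 6
    phat = phat_sum / (Q - n_cancel) if n_cancel < Q else float("nan")  # step 8
    return p0, phat

p0, phat = simulate_p0_phat(m2=0.0, s1=2.767, s2=3.515, u=2.0, p_crit=0.3)
```

For Q around 10⁵ the Monte Carlo noise in p0 is of order 10⁻³, which is ample for the decision problems considered here.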
In this normally distributed case, we see that p0 and p̂ are easily derived by modeling the distribution of possible future forecast distributions. This, in turn, is derived from an understanding of possible changes in the mean forecast, which in turn is derived from knowledge of the properties of the forecast errors.
The model used for the change in forecast means given above can be described mathematically as a single step of a random walk, or as a type of Markov process known as a martingale. Similar models can be used to model Brownian motion (in physics), or the logarithm of the changes in stock prices (in economics). A more detailed discussion of using random walks for both stock prices and expected weather outcomes is given in Jewson et al. (2005), and for stock prices in many standard finance textbooks such as Wilmott et al. (1995), Baxter and Rennie (1996), and Hull (2017).

2) NUMERICAL INTEGRATION SOLUTION
We now describe how p0 and p̂ can, alternatively, be calculated using analytical expressions and numerical integration, which is faster and more accurate, but less easy to generalize. We start by deriving an expression for p0.
On Friday, the probability of bad weather on Saturday is given by p1. Given values for m1 and s1 (with s1 assumed to be a single value), and using the assumption that the forecast consists of a normal distribution, p1 can be written using the cumulative distribution function of the standard normal distribution Φ as 1 minus the probability of good weather, giving

p1 = 1 − Φ[(u − m1)/s1].

If p1 > p_crit then the event will be canceled. The function Φ is monotonically increasing, and so this will occur for large values of m1. Instead of using a threshold for p1 we can therefore use a threshold for m1, which we will call m_crit, defined by

p_crit = 1 − Φ[(u − m_crit)/s1],

or the inverse

m_crit = u − s1 Φ⁻¹(1 − p_crit).

The decision to cancel can now be based on m1 > m_crit, instead of p1 > p_crit.
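The equivalence between the p1 threshold and the m1 threshold can be illustrated with a short check (our code; the values of u, s1, and p_crit are illustrative assumptions):

```python
from statistics import NormalDist

std = NormalDist()
u, s1, p_crit = 2.0, 2.767, 0.3                  # illustrative values
# Inverse of p_crit = 1 - Phi[(u - m_crit)/s1]:
m_crit = u - s1 * std.inv_cdf(1.0 - p_crit)

# The two decision rules agree for any trial value of m1,
# because p1 is a monotonically increasing function of m1:
for m1 in (-3.0, 0.0, m_crit + 1e-3, 5.0):
    p1 = 1.0 - std.cdf((u - m1) / s1)
    assert (p1 > p_crit) == (m1 > m_crit)
```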
The probability density of m1 given m2, as evaluated on Thursday, can be written in terms of the probability density of the standard normal distribution φ as

p(m1|m2) = [1/√V(d)] φ{(m1 − m2)/√V(d)}.

This is the distribution that we simulate from in step 3 of the simulation algorithm above.
The probability p0 that m1 will exceed m_crit is equal to 1 minus the probability that m1 will be less than m_crit. The probability that m1 will be less than m_crit is given by the CDF of the distribution of m1, evaluated at m1 = m_crit, and so

p0 = 1 − Φ[(m_crit − m2)/√V(d)].

This expression for p0 can be used to replace step 6 of the simulation algorithm.

We now derive an expression for p̂. The probability p̂ is the probability that the weather is bad, even if the event goes ahead, and we will write this as P(bad|goes ahead). Using the law of total probability, this can be decomposed into an integral over all possible values of m1 as

p̂ = P(bad|goes ahead) = ∫ P(bad|m1) p(m1|goes ahead) dm1,

where P is used to indicate a cumulative probability and p is used to indicate a probability density. The first term inside the integral, P(bad|m1), is the cumulative probability of bad weather given m1, and is given by the expression for p1 given above. The second term inside the integral, p(m1|goes ahead), is the probability density of m1, given that we go ahead. This density is proportional to the density p(m1|m2) given above in the range from minus infinity to m_crit, and is zero elsewhere. With renormalization by a factor of 1/(1 − p0) to ensure that it is a probability density, p(m1|goes ahead) is thus given by

p(m1|goes ahead) = [1/(1 − p0)] p(m1|m2) for m1 < m_crit, and 0 otherwise.

Putting these together in the integral gives

p̂ = [1/(1 − p0)] ∫₋∞^(m_crit) {1 − Φ[(u − m1)/s1]} [1/√V(d)] φ{(m1 − m2)/√V(d)} dm1.

This expression can be evaluated using numerical integration as a replacement for steps 7 and 8 in the simulation algorithm.
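A minimal sketch of this numerical-integration solution, using the closed-form expression for p0 and Simpson's rule for the p̂ integral (our code, stdlib only; the lower integration limit is truncated at eight standard deviations below m2, and the parameter values in the example call are illustrative):

```python
from statistics import NormalDist

def integrate_p0_phat(m2, s1, s2, u, p_crit, n=4000):
    """p0 from the closed form; p-hat via Simpson's rule (n must be even)."""
    std = NormalDist()
    sd_d = (s2**2 - s1**2) ** 0.5                    # sqrt of V(d)
    m_crit = u - s1 * std.inv_cdf(1.0 - p_crit)
    p0 = 1.0 - std.cdf((m_crit - m2) / sd_d)

    # Integrand: p1(m1) times the density of m1 given m2, for m1 < m_crit.
    change = NormalDist(m2, sd_d)
    def f(m1):
        return (1.0 - std.cdf((u - m1) / s1)) * change.pdf(m1)

    lo = m2 - 8.0 * sd_d                             # effective stand-in for -infinity
    h = (m_crit - lo) / n
    total = f(lo) + f(m_crit)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(lo + i * h)
    phat = (total * h / 3.0) / (1.0 - p0)            # renormalize by 1/(1 - p0)
    return p0, phat

p0, phat = integrate_p0_phat(m2=0.0, s1=2.767, s2=3.515, u=2.0, p_crit=0.3)
```

Because the integrand is smooth and bounded, a few thousand Simpson panels already give accuracy well beyond what the decision problem requires.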

A synthetic forecast example
We now test the extended cost-loss decision algorithm using synthetic, randomly generated forecasts and observations with appropriate statistical properties. There are two reasons for first using synthetic, rather than real, forecast data. First, by using synthetic data we can test the logic of the extended cost-loss decision framework and the normal distribution implementation algorithm, without also having to test whether any particular real forecast dataset fits the assumptions made in the derivation of the implementation algorithm. Second, by using synthetic forecasts we can derive results that are presumably as good as the results from the decision framework could ever be, because the data can be constructed so that the assumptions are satisfied perfectly. These results can then be used as a benchmark against which to compare results from real forecast data. We use the simulation algorithm rather than the numerical integration solution because we have the ambition to extend the model to allow for distributions for s1 and s2, and for forecast distributions other than normal.

a. Constructing the synthetic forecast dataset
The synthetic data need to satisfy various statistical properties in order to represent real forecasts sufficiently realistically. There are four conditions that the synthetic data need to meet: the synthetic forecasts and forecast errors need to be normally distributed; the forecasts need to be unbiased; the mean-square error (MSE) values have to be realistic, in the sense that the 1-day forecast should on average be more accurate than the 2-day forecast; and finally, e1 and d need to be uncorrelated, following the discussion in section 3a above.
To achieve these properties, the synthetic forecasts and observations are created using the following steps, which work by first simulating Thursday's forecast, then Friday's forecast conditional on Thursday's forecast, and then the observations conditional on Friday's forecast. Simulating in this order makes it straightforward to create synthetic forecast data with the required properties, although other simulation methods could also be used, such as a single step of simulation from a three-dimensional multivariate normal distribution with appropriate means and covariances.

1) Assign values to s1 and s2, the forecast error standard deviations, with s1 < s2.
2) Calculate V(d) using V(d) = s2² − s1².
3) Simulate D values for Thursday's forecast mean m2 using N(0, s2pop²), where s2pop² is the variance of values of m2 and can be estimated from past forecasts.
4) For each value of m2, simulate a corresponding value of Friday's forecast mean m1 using m1 = m2 + d, where d is simulated using N[0, V(d)].
5) For each value of m1, simulate a corresponding value of the observation a using a = m1 − e1, where e1 is simulated using N(0, s1²).

That the synthetic forecasts generated in this way have the required statistical properties can be demonstrated as follows:
1) Friday's forecast is unbiased because E(m1 − a) = E(e1), and e1 is simulated with mean zero.
2) Thursday's forecast is unbiased because E(m2 − a) = E(m1 − d − a) = E(e1) − E(d), and both e1 and d are simulated with mean zero.
3) The e1 and d are uncorrelated because they are simulated from independent normal distributions.
4) The MSE of Friday's forecast is s1² because e1 is simulated with variance s1².
5) The MSE of Thursday's forecast is s2² because V(m2 − a) = V(m1 − d − a) = V(e1) + V(d) = s2².

We can also calculate the implied correlation between the two forecast means m1 and m2, which is given by {1 + [V(d)/s2pop²]^(1/2)}⁻¹. We use the above algorithm to simulate D = 10 000 sets of two forecasts and one observation, and we define "bad weather" to be temperatures over a threshold defined by the 70th percentile of the observed temperature distribution. To give a practical interpretation of this definition, one could imagine an event for which high temperatures on the day of the event might lead to immediate cancellation for health and safety reasons and would incur a loss in terms of refunds to paying participants. As a real-world example, the 2019 New York City triathlon, due to be held on 28 July 2019, was canceled at the last minute because of a prediction of a heat wave, and all participants were refunded their entry fees (CNN 2019).
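The five simulation steps, together with checks of the properties listed above, can be sketched as follows (our code and naming; D is enlarged to 100 000 here so that the sample statistics settle close to their target values):

```python
import random
import statistics

random.seed(42)
s1, s2, s2pop = 2.767, 3.515, 3.011      # step 1 (values from the paper)
v_d = s2**2 - s1**2                      # step 2
D = 100_000

m2 = [random.gauss(0.0, s2pop) for _ in range(D)]             # step 3
m1 = [x + random.gauss(0.0, v_d**0.5) for x in m2]            # step 4: m1 = m2 + d
a = [x - random.gauss(0.0, s1) for x in m1]                   # step 5: a = m1 - e1

bias1 = statistics.fmean(x - y for x, y in zip(m1, a))        # ~0: Friday unbiased
bias2 = statistics.fmean(x - y for x, y in zip(m2, a))        # ~0: Thursday unbiased
mse1 = statistics.fmean((x - y) ** 2 for x, y in zip(m1, a))  # ~s1^2
mse2 = statistics.fmean((x - y) ** 2 for x, y in zip(m2, a))  # ~s2^2

mu1, mu2 = statistics.fmean(m1), statistics.fmean(m2)
cov = statistics.fmean((x - mu1) * (y - mu2) for x, y in zip(m1, m2))
corr = cov / (statistics.pstdev(m1) * statistics.pstdev(m2))  # sample corr of m1 and m2
```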
We use values of s2pop = 3.011°C, s1 = 2.767°C, and s2 = 3.515°C, giving V(d) = 4.697°C² and a correlation between m1 and m2 of 0.582. These values are derived from the real forecasts used in the next section. The values could be very different for other forecast variables, and for other weather, seasonal, and climate forecasts.
We compare the average utilities from applying the extended cost-loss model (which we label as extended) with those from three less sophisticated strategies, which are: always ignore Thursday's forecast and wait until Friday before making a decision using the single-stage cost-loss model (which we label as always-fc1); always decide on Thursday using the single-stage cost-loss decision model and then ignore Friday's forecast (which we label as always-fc2); and the more subtle strategy of using the single-stage cost-loss decision model on Thursday and then again on Friday if the event has not already been canceled (which we label as basic-twice). The basic-twice method is the most similar to the extended cost-loss decision model that we have derived, but neglects a proper analysis of the potential value of waiting for the next forecast when making Thursday's decision. This is taken into account in the extended model.
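The three simpler strategies can be sketched as cost functions for a single forecast pair (our code and naming, not the paper's implementation; utilities are the negatives of these costs, and we use the standard single-stage cost-loss threshold: cancel if the probability of bad weather exceeds cost/loss):

```python
def single_stage_cancel(p_bad, cost, loss):
    """Standard single-stage cost-loss rule: cancel iff p_bad > cost/loss."""
    return p_bad > cost / loss

def cost_always_fc1(p1_bad, bad, C1, L):
    # Wait for Friday, then apply the single-stage rule with Friday's cost C1.
    if single_stage_cancel(p1_bad, C1, L):
        return C1
    return L if bad else 0.0

def cost_always_fc2(p2_bad, bad, C2, L):
    # Decide on Thursday with cost C2; ignore Friday's forecast entirely.
    if single_stage_cancel(p2_bad, C2, L):
        return C2
    return L if bad else 0.0

def cost_basic_twice(p2_bad, p1_bad, bad, C1, C2, L):
    # Thursday's single-stage rule first; if not canceled, Friday's rule.
    if single_stage_cancel(p2_bad, C2, L):
        return C2
    return cost_always_fc1(p1_bad, bad, C1, L)
```

The extended strategy additionally weighs p0 and p̂ when making Thursday's decision, which is exactly the refinement the simpler rules lack.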

b. Synthetic forecast results
Results for C1 in the range from 0.1 to 0.8 are shown in the eight panels in Fig. 2. In each case C2 is varied such that the ratio C1/C2 takes values from 1 to 1.6, while L is always fixed at 1. Figure 3 shows the results for extended minus basic-twice, along with bootstrapped confidence intervals. We see in Fig. 2 that extended gives the best results overall, as would be expected from the theory and derivations given in the previous sections, and because it brings more information to bear on the decision. For almost all parameter settings tested it gives higher average utilities than the simpler methods, and Fig. 3 shows that the differences between extended and basic-twice are significant at the 95% level in most cases, especially for lower values of C1. In some cases basic-twice gives slightly higher average utilities, but the differences are never significant. That basic-twice can beat extended, although never significantly, can be explained by the two methods being very close for parameter settings that correspond to limiting cases, combined with our use of a finite number of simulations. The relative ranking of the other methods varies with the parameter values: only extended is always in first or second place. This shows that extended does the best job of using the forecast data to make sensible decisions across a wide range of situations.
The results in Fig. 2 can be interpreted in more detail as follows. We start by considering the results for C1 = 0.1 (the first panel in Fig. 2), which corresponds to canceling on Friday being cheap relative to the loss that might be incurred on Saturday: this makes canceling on Friday potentially attractive. When C1/C2 is 1 (the left-hand end of the horizontal axis), cancellation on Thursday is no cheaper than canceling on Friday, and so there is no reason not to wait until Friday. As a result, always waiting until Friday to make the decision (always-fc1) gives results as good as extended. The other two decision methods perform less well, because they use Thursday's forecast, and this can only hinder making an optimal decision when canceling on Thursday is no cheaper than canceling on Friday.
For values of C1/C2 greater than 1, cancellation on Thursday becomes cheaper than cancellation on Friday, and the decision becomes a complicated one. All of the factors now come into play: the various costs, the skill of the forecasts, and the logic by which any decision made on Thursday needs to take into account how the forecast might change between Thursday and Friday (and what decision that might lead to on Friday). As a result, extended, which is the only method that takes all these factors into account, beats or matches the other three methods.
The ranking of the simpler methods varies with C1/C2. As C1/C2 increases from 1 to 1.1, always-fc1 is soon overtaken by basic-twice and then by always-fc2. This is because always-fc1 ignores Thursday's forecast, and this becomes increasingly unhelpful as canceling on Thursday becomes cheaper.
For the limiting case of large C1/C2, the results for extended, always-fc2, and basic-twice gradually converge, and only always-fc1 performs badly. This is because canceling on Thursday becomes very cheap, and the whole decision effectively becomes a question of whether to cancel on Thursday. Only methods that allow for that can do well.
Considering the other panels in Fig. 2: as C1 increases, canceling on Friday becomes more expensive. Up to C1 = 0.4 the results are qualitatively the same as they are for C1 = 0.1. For C1 > 0.4 the margin by which extended beats the other methods is reduced, as canceling on Friday becomes more expensive and the value of making a good decision as to whether to wait until Friday is reduced.
For C1 > 0.6 the results from extended and basic-twice are very similar. This is because cancellation on Friday is so expensive that waiting until Friday to make the decision makes little sense: the decision can be made on Thursday. The subtle logic that extended uses to decide whether to wait until Friday becomes more or less irrelevant, and basic-twice actually does slightly better than extended in some cases (although not significantly so), presumably because of simulation noise.
Overall, we see from these results that extended always does well, but has the most impact in the situations in which all the options are potentially reasonable and the decisions on Thursday and Friday are both trade-offs. In the limiting cases in which canceling on Thursday or Friday is either very cheap or very expensive, the results from one or other of the simple methods are as good as extended. In a real situation, without detailed analysis, one would not know whether the parameters are in a limiting case, and hence always using extended would make the most sense because it is the only method that works well in all cases.

A real forecast example
We now consider an example in which we apply the extended cost-loss decision framework derived in section 2, using the implementation algorithm described in section 3, to 30 years of real numerical model temperature reforecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) and station observations for Stockholm, Sweden, from the Swedish Meteorological and Hydrological Institute (SMHI). We use the same set of decision problems and parameter ranges as was used to test synthetic forecasts in the previous section. Testing with real forecasts and real observations in this way is a much tougher challenge because of the uncertainties and unknown biases in real data, relative to our assumptions.

a. Constructing the real forecast dataset
Our forecast and validation datasets consist of 36-h and 108-h ERA-Interim forecasts initialized at 1200 UTC (Dee et al. 2011) and corresponding observed temperature values from 1979 to 2018. We only consider the 92-day period from 1 June to 31 August in each year. We use 1979-88 for calibration of the forecasts (giving 920 calibration cases) and 1989-2018 for out-of-sample validation (giving 2760 validation cases). We use forecasts with a wide spacing in lead time to magnify the impact of the method, so that the benefits can be seen more clearly above the noise in the results. The forecasts are calibrated using linear regression, by regressing the observed values onto both the forecast and the previous forecast. The regression parameters are considered constant in time. We calibrate the mean of the ensemble, while the standard deviation of the forecasts is derived from past forecast errors from the calibration period. This calibration method would be sufficient to ensure good calibration (as defined in section 3 above) if the forecast errors were genuinely normally distributed and homogeneous, and if the calibration adjustments required were genuinely constant in time. In reality, the forecast errors are unlikely to be exactly normally distributed, may exhibit nonhomogeneity, and the ideal adjustments would likely not be constant in time. Depending on the degree of misfit of these approximations, this might be expected to affect the effectiveness of the decision algorithm for this dataset.

FIG. 2. Average utilities over all forecast cases, for a range of parameter values in the decision problem, calculated from synthetic forecasts, and compared across the four decision-making methods. The blue lines show utilities for the always-fc1 model, the orange lines show utilities for the always-fc2 model, the green lines show utilities for the extended model, and the red lines show utilities for the basic-twice model. Vertical axis ranges vary across the panels.
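The calibration regression can be sketched as follows (our code, not the authors' actual pipeline; a two-predictor least-squares fit via the normal equations, with the forecast standard deviation taken from the calibration-period residuals):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def calibrate(fc, prev_fc, obs):
    """Regress obs on (1, fc, prev_fc); return coefficients and residual std dev."""
    X = [[1.0, f, p] for f, p in zip(fc, prev_fc)]
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(X, obs)) for i in range(3)]
    beta = solve(xtx, xty)
    resid = [y - (beta[0] + beta[1] * f + beta[2] * p)
             for (f, p), y in zip(zip(fc, prev_fc), obs)]
    sigma = (sum(e * e for e in resid) / len(resid)) ** 0.5
    return beta, sigma
```

Applied to the calibration period, `beta` gives the calibrated forecast mean and `sigma` the calibrated forecast standard deviation at each lead time.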
The calibrated forecasts consist of a mean and a standard deviation for each of the two lead times: m1, s1 and m2, s2. The standard deviations of the forecasts are s2 = 3.515°C and s1 = 2.767°C. From s1 and s2 we calculate the variance of d as V(d) = 4.697°C². We again define bad weather to be temperatures over a threshold defined as the 70th percentile of the observed data.

b. Real forecast results
We compare the average utilities from applying the extended cost-loss model over the 2760 validation cases with those from the three less sophisticated strategies, as before. The results for all four methods are shown in Fig. 4, with Fig. 5 showing extended minus basic-twice with 95% confidence limits. The results are similar to those shown for the synthetic data in Fig. 2. Overall, extended does better than the other methods and is the only method that always places best or second best. Extended again performs best, relative to the other methods, for C1 < 0.5, and almost all the results are significantly better than basic-twice in this range (see Fig. 5).
As for the synthetic data, extended does not give the highest utility score for every parameter setting. For example, for C1 > 0.4 we can see that for some values of the ratio C1/C2 basic-twice does better than extended, but never significantly so. These are limiting cases in which the two methods give very similar results, and basic-twice beating extended would seem to be due to noise in the data resulting from the finite sample size.

Discussion and conclusions
Probabilistic weather and climate forecasts can be used as input to decisions in various situations. The single-stage cost-loss model is an idealized form of a class of decisions that can be represented by an event organizer having to make a forecast-based decision by considering the trade-off between the cost of cancellation of an event one day in advance, and the risk of going ahead with the event and the weather turning out bad and causing a loss. Analogous situations, with the same logical structure, appear in many aspects of forecast-based decision-making, whether using weather forecasts, seasonal forecasts, or climate projections. They also occur in many other branches of science, engineering, and economics.
The single-stage cost-loss model has previously been generalized in a number of ways, such as allowing for multiple possible actions (Murphy 1985) and multiple stages (Murphy et al. 1985; Epstein and Murphy 1987; Murphy and Ye 1990; Wilks 1991; Wilks et al. 1993; Wilks and Wolfe 1998; Regnier and Harr 2006; Roulin 2007; McLay 2011). In this study we have used a two-stage version of the model, involving two lagged forecasts, in which costs vary between the stages and there is a possibility of loss at the end of the second stage. We have considered the question of whether the forecast user should make their decision based on the first forecast or should postpone their decision until they see the second forecast. If, after consideration of the first forecast, a decision is made to wait, the second decision is then the same as that in the single-stage cost-loss model. Similar generalizations of the cost-loss model are considered by Murphy and Ye (1990) and Regnier and Harr (2006), and also arise in other fields.
We have analyzed the decide-or-wait decision that needs to be made based on the first forecast using expected utility. The process of using a forecast to make this decision requires the calculation of two new forecast quantities. One is p0, the probability (evaluated using the first forecast) that the probability (evaluated using the second forecast) of bad weather will exceed a critical probability p_crit, where p_crit is derived from the utilities of the different outcomes. The second new forecast quantity is p̂, the probability (evaluated using the first forecast) that, if (when we get the second forecast) we decide to go ahead with the event, the weather at the time of the event will nevertheless turn out bad.
These two quantities are nontrivial to calculate in general and require detailed modeling of the probabilities of the forecast system using Markov chains, as described in Regnier and Harr (2006) and McLay (2008). However, for the case in which forecasts and forecast changes consist of normal distributions, we have been able to derive a simpler method. In our method, the error statistics of the forecasts, which may already be known, can be used to derive the variance of forecast changes. The variance of forecast changes can in turn be used, when the first forecast becomes available, to run simulations of the possible distribution of probabilistic forecasts that will be produced in the second forecast. Based on this distribution of distributions, p0 and p̂ can easily be calculated. Solutions for p0 and p̂ based on numerical integration are also possible, which avoids the need for simulations.
We have tested our extended cost-loss decision algorithm on synthetic forecast data, and compared the utilities of the decisions that it makes with those made by three simple alternatives. The simple alternatives were 1) always decide using the second forecast, 2) always decide using the first forecast, and 3) decide using the first forecast, without taking into account that there will be a second forecast, but then potentially reverse the decision using the second forecast (i.e., apply the single-stage cost-loss model twice on consecutive days). None of these simple methods correctly analyzes the logic of whether to decide now or wait for the second forecast. The extended algorithm worked as expected and, overall, it gave better decisions than these simple alternatives. For some extreme parameter values it performed slightly less well than applying the single-stage cost-loss model twice on consecutive days (although not significantly so). We attribute this to the use of simulations in the testing, and to the fact that in extreme parameter cases the subtle logic that the extended method uses becomes irrelevant, so that simpler methods may be just as effective. The overall success of the extended method validates the underlying logic and also validates the effectiveness of the implementation algorithm for the case where the forecast errors are genuinely normally distributed and well calibrated. However, it is not surprising that the algorithm works well, since we are adding information to the decision process, and by using synthetic forecasts and observations everything except the randomness of the simulations is under our control.

FIG. 4. Average utilities as in Fig. 2, but now based on real weather forecast data and observations for Stockholm.
We have also tested the method on real forecast data and real observations. This is a much more challenging test than using synthetic data, since real forecasts and observations will undoubtedly not follow our statistical assumptions precisely. To the best of our knowledge, this is the first time that the decisions from a multistage dynamic decision-making algorithm have been validated on real forecast data and real observations in this way. Once again, the extended cost-loss method outperformed simpler decision methods for most parameter values tested and performed best overall.
In summary, we have applied decision theory ideas and methods, specifically those related to dynamic programming, to the question of whether to decide now or wait for the next forecast, building on the work of multiple previous authors. In a situation in which waiting for the next forecast is an option, the probabilities in a probabilistic forecast are not enough to make a rational decision and need to be supplemented with additional information. In the case of well calibrated forecasts with normally distributed forecast errors, we have shown that the forecast MSE at the relevant lead times is sufficient to provide this information. The MSE can be used to derive the variance of the size of forecast changes, which in turn can be used to determine whether to wait for the next forecast. For nonnormal forecasts the picture is more complicated. For some nonnormal distributions there may be simplifying assumptions that can be used to derive simple algorithms as we have done for the normal distribution. In general, however, detailed nonparametric modeling of how forecast probabilities change from one lead time to the next is required. This general case has been considered by Regnier and Harr (2006) and McLay (2008).
One implication of our work, and prior work in this area, is that in order to realize the full potential of probabilistic forecasts for decision-making, forecast providers may need to consider providing additional forecast-change-related information along with the forecasts that they supply. Forecast users could then use that information to make more rational decisions around the question of whether to wait for the next forecast. Those decisions might be made subjectively, with the additional information used as inputs; they might be made objectively using the extended cost-loss model we have described; or they might be made objectively using other decision theory models, as appropriate.