The area to be cropped in irrigation districts needs to be planned according to the available water resources to avoid agricultural production loss. However, the period of record of local hydro-meteorological data may be short, leading to an incomplete understanding of climate variability and consequent uncertainty in estimating surface water availability for irrigation area planning. In this study we assess the benefit of using global precipitation datasets to improve surface water availability estimates. A reference area that can be irrigated is established using a complete record of 30 years of observed river discharge data. Areas are then determined using simulated river discharges from six local hydrological models forced with in situ and global precipitation datasets (CHIRPS and MSWEP), each calibrated independently with a sample of 5 years extracted from the full 30-year record. The utility of establishing the irrigated area based on the simulated river discharges is compared against the reference area through a pooled relative utility value (PRUV). Results show that for all river discharge simulations the benefit of choosing the irrigated area based on the 30 years of simulated data is higher than that of using only 5 years of observed discharge data, as the statistical spread of PRUV using 30 years is smaller. Hence, it is more beneficial to calibrate a hydrological model using 5 years of observed river discharge and then to extend the simulation with 30 years of global precipitation data, as this outweighs the uncertainty introduced by the model calibration.
As water becomes scarce, efficient decision-making based on solid information becomes increasingly important (Svendsen, 2005). Solid information on climate variability and climate change is key to adequately estimating the availability of water for human livelihoods, the environment and agricultural development (Kirby et al., 2014, 2015), especially for irrigated agriculture, which by volume is the largest user of freshwater (de Fraiture and Wichelns, 2010). Available climatological records used for estimation of water resource availability in the irrigation sector are, however, often short (Kaune et al., 2017), and may not be representative of the full distribution of climate variability. This may particularly be so in developing countries, where the need to develop irrigation areas is greatest, and can lead to sub-optimal decisions, such as overestimating or underestimating the area that can be planted. Local authorities deciding on the irrigated area would clearly prefer to base their estimate of the adequate irrigation area on the true record of climate variability, so that the decision can be justified by expected economic benefits, but such records are often short.
Recent studies show that hydrological information from remote-sensing datasets can be effectively used for estimation of surface water availability (Peña-Arancibia et al., 2016), for water accounting (Karimi et al., 2013) and to help improve detection of droughts at basin scale (Linés et al., 2017). Combined with local data, these datasets can potentially provide improved information to support decisions in irrigated agriculture. Global hydrological models have been used to estimate the river discharge at basin level for the development of irrigated areas and to assess the risk of water scarcity (Kaune et al., 2018), and although these show promising results in large basins, the use of a calibrated local hydrological model may be more suitable in smaller basins (López López et al., 2016) as a finer spatial resolution may then be used and local hydrological processes better represented.
Workflow of the study to determine the pooled relative utility value using different irrigation areas obtained from in situ, CHIRPS and MSWEP precipitation datasets.
Such local models will typically require some level of calibration, and the challenge is to calibrate these when the period of record of the observed data from available in situ stations is limited. If the period of record is short, then the data may not provide full representation of the true climatic variability, and the water resource estimate will be conditional on whether the available data are from a relatively wet, normal or relatively dry period. This is particularly relevant in climates that are influenced by phenomena such as the El Niño–Southern Oscillation (ENSO).
Using hydrological models forced by a longer period of record from available precipitation datasets may help improve discharge estimates for reliably determining the irrigated area, as the climatic variability can be better represented. However, model uncertainty, as well as the uncertainty of the representativeness of the model given the data used in model calibration, will need to be taken into account. Recently, several global precipitation datasets have become available, based on remote sensing as well as re-analysis models, with periods of record spanning 30 plus years. Examples include the CHIRPS precipitation dataset (Funk et al., 2015), which integrates in situ meteorological data and global earth observations, and the recently developed MSWEP precipitation dataset (Beck et al., 2017b), which integrates in situ meteorological data, global earth observations and the ERA-Interim re-analysis datasets. Both have been widely used to assess water availability and the risk of water scarcity and drought events (López López et al., 2017; Shukla et al., 2014; Toté et al., 2015; Veldkamp et al., 2015).
Despite the opportunities these modern datasets offer, they have largely been neglected by the irrigation sector for the estimation of water resource availability and variability (Turral et al., 2010), which relies primarily on in situ datasets, even when the availability of these datasets is often limited. Assessing the potential benefit of combining data from available in situ stations, global earth observations and reanalysis datasets to better estimate surface water availability can therefore be of considerable value to irrigation managers.
In this paper we hypothesize that the simulated river discharge for a period of record of 30 years using a calibrated local model forced by datasets such as CHIRPS or MSWEP provides more reliable estimates of water resource availability and the area to be irrigated than when considering the shorter time series of observed discharge that is used to calibrate the model. This is evaluated through an extended version of the hydro-economic Expected Annual Utility framework that determines the value of using each of the different datasets in determining the areas that can be irrigated as a function of the estimated availability of water.
Map of the Coello and Cucuana river basins and the Coello irrigation district, and their location in the Magdalena macro-basin in Colombia. The points indicate discharge stations and the squares indicate meteorological stations.
The pooled relative utility value, PRUV, used in this study is defined as a concatenated vector of six samples of the relative utility value. Each sample accounts for the irrigation area obtained from river discharge simulations with a given precipitation dataset, the monthly probability of water scarcity for that area, and the potential yield reduction of rice due to water deficit. The workflow of this study is shown in Fig. 1.
We apply our analysis to the Coello Irrigation District in Colombia. The
Coello Irrigation District is an existing irrigation district located in the
upper Magdalena basin, in the Tolima Department, a region subject to
considerable climate variability and that is vulnerable to droughts (IDEAM,
2015). The average monthly temperature in the Coello District is 28 °C.
The water available for irrigation depends on the total discharge of two
rivers from neighbouring mountainous basins: the Coello and Cucuana rivers
(Fig. 2). The Coello basin has an area of 2000 km².
In situ precipitation and temperature data were obtained from the network of
meteorological stations operated by the Instituto de Hidrología,
Meteorología y Estudios Ambientales (IDEAM), the Colombian
hydro-meteorological institute, and interpolated to a gridded dataset with a
0.1° spatial resolution.
Two global precipitation datasets were considered: (i) the Climate Hazards
Group InfraRed Precipitation with Station data (CHIRPS; Funk et al., 2015)
and (ii) the Multi-Source Weighted-Ensemble Precipitation (MSWEP; Beck et
al., 2017). CHIRPS precipitation is a remotely sensed and ground-corrected
dataset available globally at a 0.05° spatial resolution.
All precipitation, temperature and potential evapotranspiration datasets are
available for the 1983–2012 period. A preliminary evaluation of the global
precipitation datasets was done. The global precipitation datasets (CHIRPS
and MSWEP) were compared against in situ data in the selected basin. The
performance indicators Kling–Gupta efficiency (KGE), percentage of bias and
correlation were used for the comparison.
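For reference, the performance indicators can be computed as follows. This is a minimal sketch using the standard KGE formulation (correlation, variability ratio and bias ratio) with synthetic values; it is not the paper's actual station data.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
    alpha = np.std(sim) / np.std(obs)      # variability ratio
    beta = np.mean(sim) / np.mean(obs)     # bias ratio
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

def pbias(sim, obs):
    """Percentage of bias: positive when the dataset overestimates the total."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * (sim.sum() - obs.sum()) / obs.sum()

# Perfect agreement gives KGE = 1 and 0 % bias.
obs = np.array([120.0, 80.0, 60.0, 150.0, 90.0])
print(round(kge(obs, obs), 3))   # 1.0
print(round(pbias(obs, obs), 3)) # 0.0
```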
Daily river discharge data for the 1983–2012 period were obtained from the stations operated by IDEAM at gauging station Payande (21217070) in the Coello River.
The Dynamic Water Balance Model (Zhang et al., 2008), a lumped conceptual
hydrological model based on the Budyko framework (Budyko, 1974), was selected
to simulate the river discharge in the Coello basin at a monthly timescale.
The Dynamic Water Balance Model has been applied in several basins around the
world (Kaune et al., 2015; Kirby et al., 2014; Tekleab et al., 2011; Zhang et
al., 2008), showing reliable river discharge simulations at a monthly
timescale. The model has a simple structure without routing, simulating the
basin hydrological processes with a reduced number of parameters. There are
only four model parameters: the basin rainfall retention efficiency (α1), the evapotranspiration efficiency (α2), the maximum soil water storage capacity (Smax) and a groundwater store recession constant (d).
In this study, surface water availability for irrigation was established as
the discharge in the Coello River, considering an environmental flow of
25 % of the available water resources. An average maximum soil moisture
storage capacity was adopted for the basin.
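The structure of the model can be illustrated with a single monthly step. The sketch below follows the general form of a Budyko-type dynamic water balance with Fu's curve for the supply-demand partitioning, in the spirit of Zhang et al. (2008); the forcing and parameter values are illustrative assumptions, not the calibrated values used in the study.

```python
def fu(x, a):
    """Fu's curve F(x, a) = 1 + x - (1 + x**a)**(1/a), bounded by min(1, x)."""
    return 1.0 + x - (1.0 + x ** a) ** (1.0 / a)

def dwbm_step(P, PET, S, G, alpha1=2.0, alpha2=2.0, d=0.5, Smax=300.0):
    """One monthly step of a Budyko-type water balance (all fluxes in mm/month).
    Returns total streamflow and the updated soil (S) and groundwater (G) stores."""
    X0 = Smax - S + PET             # catchment retention demand
    X = P * fu(X0 / P, alpha1)      # rainfall retained in the catchment
    Qd = P - X                      # direct runoff
    W = X + S                       # water available for ET and storage
    Y0 = PET + Smax                 # evapotranspiration opportunity demand
    ET = W * fu(PET / W, alpha2)    # actual evapotranspiration
    Y = W * fu(Y0 / W, alpha2)      # ET opportunity (ET + soil storage)
    S_new = Y - ET                  # soil moisture carried to the next month
    R = W - Y                       # recharge to the groundwater store
    Qb = d * G                      # baseflow from a linear groundwater store
    G_new = (1.0 - d) * G + R
    return Qd + Qb, S_new, G_new

Q, S, G = dwbm_step(P=180.0, PET=120.0, S=100.0, G=50.0)
print(round(Q, 1))  # total streamflow for the month (mm)
```

Because Fu's curve is bounded by min(1, x), retention never exceeds rainfall and the stores stay non-negative, which is what makes such a parsimonious four-parameter structure workable at a monthly timescale.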
Obtaining hydrological model simulations from the six samples of 5 years of observed river discharge.
The hydrological model was forced with the different precipitation datasets
(described in detail in Sect. 2.2). Although river flow data
were available for the full 1983
through 2012 period, to explore the influence of limited availability of
observed discharge data, six independent samples of 5 years were extracted
from the 30-year dataset (1983–1987, 1988–1992, 1993–1997, 1998–2002,
2003–2007 and 2008–2012). Each sample of 5 years was used for calibration
of the model parameters (Fig. 4). These samples were extracted as contiguous
samples of 5 years to represent different climatological periods, and were
applied to calibrate six sets of models, each using one of the observed
discharge samples. A preliminary Monte Carlo simulation was run over
the full period, from which each calibration sample was extracted;
10 000 model parameter sets were sampled and evaluated against each sample of observed discharge.
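The sampling scheme can be sketched as follows. This is a minimal illustration: the six contiguous 5-year windows match the study, while the Monte Carlo step is reduced to drawing candidate parameter sets within assumed (not the study's) bounds.

```python
import numpy as np

years = np.arange(1983, 2013)                        # the 30-year record
samples = [years[i:i + 5] for i in range(0, 30, 5)]  # six contiguous 5-year windows
print([f"{s[0]}-{s[-1]}" for s in samples])
# ['1983-1987', '1988-1992', '1993-1997', '1998-2002', '2003-2007', '2008-2012']

# Monte Carlo parameter sampling: 10 000 candidate sets drawn within
# illustrative bounds for (alpha1, alpha2, d, Smax).
rng = np.random.default_rng(42)
parameter_sets = rng.uniform(low=[1.1, 1.1, 0.0, 50.0],
                             high=[5.0, 5.0, 1.0, 500.0],
                             size=(10_000, 4))
```

Each calibration sample would then keep the parameter set that scores best against its 5 years of observed discharge, yielding six independently calibrated models.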
Similar to Kaune et al. (2018), the area that can be
irrigated is determined based on an operational target monthly water supply
reliability of 75 % in any one month.
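The area determination described above can be sketched as follows. The paper's equations are not reproduced here; this sketch assumes a fixed demand rate per unit area and reserves the 25 % environmental flow, with synthetic availability data and illustrative names and values.

```python
import numpy as np

def max_irrigable_area(q_monthly, demand_rate, reliability=0.75, env_frac=0.25):
    """Largest area whose demand is met with the target reliability in every month.

    q_monthly   : (n_years, 12) array of monthly water availability (Mm3/month)
    demand_rate : irrigation demand per unit area (Mm3/month per km2)

    With a 75 % supply reliability target, the demand must not exceed the
    25th percentile of usable water in any calendar month.
    """
    usable = q_monthly * (1.0 - env_frac)                   # reserve environmental flow
    q_low = np.quantile(usable, 1.0 - reliability, axis=0)  # monthly 25th percentiles
    return q_low.min() / demand_rate                        # the driest month binds

rng = np.random.default_rng(0)
q = rng.gamma(shape=4.0, scale=10.0, size=(30, 12))  # synthetic 30-year record
print(round(max_irrigable_area(q, demand_rate=0.5), 1), "km2 (illustrative)")
```

The binding constraint comes from the driest calendar month (February in the Coello case), which is why the analysis later focuses on dry-season performance.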
The water availability distribution for each calendar month is established
using the multi-annual monthly river discharge, which may be obtained from
either the observed or simulated data. Given the small sample size of 30
years, the empirical distribution of water availability is obtained by
applying a bootstrap resampling with replacement procedure, with the size of
the bootstrap set at 25 000. The bootstrap resampling is applied for each
month for the sample of 30 water availability values (multi-annual monthly
values). From this sample we randomly draw, with replacement, the bootstrap values that form the empirical distribution for that month.
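The bootstrap for one calendar month can be sketched in a few lines (the discharge values here are synthetic; the study applies this to each month's 30 multi-annual availability values):

```python
import numpy as np

rng = np.random.default_rng(1)
february_q = rng.gamma(shape=4.0, scale=10.0, size=30)  # 30 multi-annual values

# Resample with replacement, bootstrap size 25 000, to obtain the empirical
# distribution of water availability for this month.
boot = rng.choice(february_q, size=25_000, replace=True)
print(boot.shape)  # (25000,)
```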
Evaluating the expected annual utility using the planned irrigation area from selected river discharge information relative to expected annual utility using the reference irrigation area.
A reference irrigated area is established using the empirical distribution
derived from the observed monthly river discharges of 30 years (1983–2012).
The areas that can be irrigated for each of the six calibrated models are
similarly determined but now using the discharge simulations for the full
30-year period. Irrigated areas are additionally obtained for the six 5-year
samples of observed discharge, and for comparison also using the 5-year
period of simulated discharges for each of the six calibrated models, where
the period is commensurate with the period used for calibration. For each
irrigation area that is obtained, the real probability of water scarcity is
determined using the observed surface water availability (which is also a
multi-annual monthly bootstrap resample), and the demand calculated using the
estimated area (Eq. 2).
The cost of choosing the irrigation area was evaluated with an extended
version of the hydro-economic framework developed by Kaune et al. (2018)
based on the economic utility theory (Neumann and Morgenstern, 1966). The
cost is calculated as the opportunity cost when the irrigation area is
selected to be too small, or the production loss due to water scarcity when
the irrigation areas are selected to be too large. When the area selected is
equal to the reference area, then the cost is zero. Similarly to Kaune et
al. (2018), the relative utility value, RUV, is used to compare the expected
annual utility between the reference and the irrigated area derived using
either the simulated discharge or the shorter 5-year observed discharge
sample (Eq. 3).
The expected annual utility is calculated from the monthly probability of water scarcity and the corresponding annual crop production. The annual production loss is determined from the potential yield reduction of rice due to water deficit.
If RUV is equal to zero, then the expected annual utilities obtained with the reference and simulated irrigation areas are the same, and there is thus no cost associated with using the simulated information. A negative RUV entails an opportunity cost due to the planning of too small an irrigation area (defined as cost type 1). A positive RUV entails an agricultural loss due to the area being planned larger than can be supported by water availability and water shortages thus occurring more frequently than expected (defined as cost type 2). The statistical spread of RUV is derived from the bootstrap resample. The spread depends on the probability of water shortage being larger compared to the reference and on the yield response factor, entailing that the production loss incurred depends not only on the increased occurrence of water shortage, but also on the sensitivity of the crop to water deficit.
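The sign convention can be expressed compactly. The sketch below is not the full hydro-economic formulation of Kaune et al. (2018): the utility values are placeholders, and only the probability weighting and the cost-type classification described above are shown.

```python
def expected_annual_utility(p_scarcity, u_scarcity, u_normal):
    """Probability-weighted annual utility for water scarcity (not) happening
    in a month; u_scarcity embeds the crop production loss."""
    return p_scarcity * u_scarcity + (1.0 - p_scarcity) * u_normal

def cost_type(ruv, tol=1e-9):
    """Classify a relative utility value following the text's sign convention."""
    if ruv < -tol:
        return "cost type 1: opportunity cost (area too small)"
    if ruv > tol:
        return "cost type 2: production loss (area too large)"
    return "no cost"

print(expected_annual_utility(0.25, 60.0, 100.0))  # 90.0
print(cost_type(0.0))                              # no cost
```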
RUVs are pooled so as to give a PRUV to evaluate the cost of choosing the
irrigation area from the six possible irrigation areas obtained for a river
discharge simulation. This is done as it is not a priori clear, when only
5 years of observed data are available, from which part of the full
climatological record these may be. The PRUV is a concatenated vector of the
RUV obtained for each calibration sample (Eq. 7).
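The pooling of Eq. (7) reduces to a concatenation of the six bootstrap RUV samples. The sketch below uses synthetic RUV distributions (the medians are invented for illustration) to show how the pooled vector and its spread are formed:

```python
import numpy as np

rng = np.random.default_rng(2)
# Six bootstrap samples of RUV, one per 5-year calibration sample (synthetic).
ruv_samples = [rng.normal(loc=m, scale=0.05, size=25_000)
               for m in (0.18, 0.22, 0.30, 0.25, 0.45, 0.20)]

pruv = np.concatenate(ruv_samples)  # PRUV: the concatenated vector (Eq. 7)
print(pruv.shape)                   # (150000,)

# The interquartile spread of PRUV is the decision-risk measure: a wide
# spread means the six samples imply very different irrigation areas.
iqr = np.quantile(pruv, 0.75) - np.quantile(pruv, 0.25)
```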
Similar to RUV, the PRUV is a hydro-economic indicator that can be larger than (cost type 2), equal to (no cost) or smaller than zero (cost type 1). The statistical spread of PRUV encompasses the variability of RUV among the six calibration samples. If the statistical spread of PRUV is large, then the variability of planned irrigation areas is large among samples. This means that the cost of choosing the irrigation area based on the available information is high. If on the other hand the statistical spread of PRUV is small, then the variability of planned irrigation areas is also small and the cost of choosing the irrigation area based on the available information is low.
Observed and simulated discharge for the Coello River at Payande
with 30 years (1983–2012) of CHIRPS precipitation (Sim 30 yr CHIRPS).
The monthly observed and simulated discharges calculated with the different precipitation datasets from the calibration samples are shown in Fig. 5 (only CHIRPS with the samples for the 1993–1997 and 1998–2002 periods are shown) and in the Supplement (all samples and in situ and MSWEP). Discharge simulations change depending on which precipitation dataset is used as forcing and which sample is used to calibrate the hydrological model. In general, however, the mean discharge simulations show an overall agreement with observations.
The performance metrics for each month are shown in Fig. 6 and in the Supplement. In all months using the discharge simulations with different precipitation datasets, positive KGE values are obtained with the exception of simulations with MSWEP in April and November, which are both wet season months. In February (dry season) the highest KGE value is obtained using the simulations with observed precipitation (0.75). For all samples in February (dry season), the KGE value is higher for discharge simulations with observed precipitation and CHIRPS than those using MSWEP, with the exception of one sample (2008–2012).
In terms of the percentage of bias, performance likewise varies among simulations and months (full results in the Supplement).
The correlation values vary among simulations and for each month. The correlation values range between 0.25 and 0.85. In February, using in situ precipitation, correlation values are above 0.6. Simulations with CHIRPS and MSWEP result in correlation values between 0.7 and 0.8 in February. The largest difference between correlations occurs in March (CHIRPS correlation is 0.5, MSWEP correlation is 0.6, and in situ correlation is 0.8).
Simulations with in situ precipitation and CHIRPS are found to behave similarly, which is not surprising as CHIRPS uses station-corrected data. MSWEP also includes station-corrected data, but they are derived in part from the ERA-Interim data which in themselves are not good at capturing convective precipitation (Leeuw et al., 2015). This explains the poor simulation performance with MSWEP in April and November as these are wet months in a tropical region with predominant convective precipitation.
As our work is focused on determining the critical irrigation area under monthly water scarcity, we are less concerned with the simulation performance in wet months, but focus rather on the more critical dry months (e.g. February), which have shown to perform well for the selected precipitation datasets.
Irrigation areas obtained using different datasets of river discharge information in the Coello basin. The observed river discharge from the complete period of record of 30 years (1983–2012) is the reference information. The irrigation areas are obtained for an agreed water supply reliability of 75 % in any one month.
The areas that can be irrigated based on the water availability of the Coello
River are established using the simulated discharges from Sect. 3.1., a
defined environmental flow, a fixed demand rate per unit area cropped, and a
water supply reliability target of 75 %. Irrigation areas are established
for the reference discharge (observed 30 years); for each of the 30-year
discharge simulations using the models derived with each calibration sample;
as well as using the observed discharges for each of the six 5-year samples.
Finally, for comparison, irrigated areas are derived using only 5 years of
simulated data for each of the six 5-year samples, where the simulated 5
years are the same as the 5 years used in calibration. The areas that can be
irrigated given the simulated (or observed) discharges are found to vary
significantly when compared to the reference irrigated area (which was
established as 67.45 km²).
The areas that can be irrigated that are obtained using the observed
discharges for each of the six 5-year periods show relatively small variation
when compared to the reference area, ranging from 19 % smaller to
11 % larger. The average area of the six 5-year samples is slightly
smaller at 64.99 km².
Probability of water scarcity using the reference irrigation area obtained with the observed river discharge of 30 years (Obs 30 yr) and the reference surface water availability. Probability of water scarcity using the irrigation area obtained with the observed river discharge of 5 years (Obs 5 yr) and the reference surface water availability. Boxplots show the median, interquartile range and minimum–maximum range.
The probabilities of water scarcity using the irrigation areas obtained for each of the simulated and observed discharges for the 5-year periods as well as for the reference are shown in Figs. 7 and 8 (samples 1993–1997 and 1998–2002) and in the Supplement (all samples). The probabilities of water scarcity using the irrigation area obtained with the observed discharges are shown in Fig. 7. As expected, the probability of water scarcity in February, which is the most critical month, shows a median value equal to 25 %, and probabilities lower than 25 % for the other months, when using the irrigation area obtained with the reference discharge (30 years). The spread of the probability of water scarcity indicated by the box–whiskers plot, showing the median, interquartile range and minimum and maximum, is due to the distribution of the bootstrap, representing the uncertainty in the estimate due to the 30-year period of record.
Probability of water scarcity using the irrigation area obtained
with simulated river discharge information (Sim 30 yr) and the reference surface water availability.
Figure 7 similarly shows the probability of water scarcity for irrigation areas obtained using observed discharges for the 5-year periods: 1993–1997 and 1998–2002 (results for the other four periods included in the Supplement). This shows that for the period 1993–1997, the median value is lower than 25 % for all months, while for the period 1998–2002 the median value is higher than 25 % for January and February. This reflects the smaller or larger irrigated areas established with each of these datasets. Figure 8 shows the probability of water scarcity for irrigation areas obtained using the discharge simulations of 30 years. The probability of water scarcity in February shows median values higher than 25 %, commensurate with the overestimation found in the hydrological model, with the exception of one simulation using observed precipitation, calibrated with the 1993–1997 sample of observed discharge data. Between April and June and in October to November, using the irrigation areas obtained with the discharge simulations, the probability of water scarcity is always found to be lower than 25 %, as these are the two wet seasons of the bimodal climate. For all samples, the probability of water scarcity is highest for the simulations using MSWEP precipitation. Using the irrigation areas obtained from the simulations calibrated with the 1983–1987 and 1998–2002 samples shows higher probabilities of water scarcity for all months when compared to the simulations calibrated with the other samples. This shows that these years were relatively wet, influencing discharge simulations and resulting in larger irrigation areas being selected. The pattern for sample 1993–1997 is more similar to the pattern found using the reference area found with the 30 years of observed discharge.
The probabilities of water scarcity for irrigated areas obtained with simulated discharges of only 5 years are shown in Fig. 8 (again, results for the 1993–1997 and 1998–2002 samples are shown, with the remaining four periods provided in the Supplement). The monthly probabilities of water scarcity show large differences between samples. In this case, four out of the six samples do not show a median probability of water scarcity higher than 25 % for any month, meaning that the irrigation area is underestimated compared to the reference. For the 1998–2002 sample, the probability of water scarcity is highest, with a median probability of water scarcity between 50 % and 75 % in February.
The annual expected utility is calculated using the economic return of the rice crop and the estimated yield determined using the irrigated areas established with the simulated discharge information, and the probability of water scarcity in each month for the 30-year period based on the observed discharges. Relative utility values are then found by comparing these against the annual expected utility calculated using the reference area and discharge information.
Figure 9 shows the relative utility values obtained for areas determined
using the 5-year samples of observed discharge for the 1993–1997 and
1998–2002 periods (again, the remaining four periods are provided in the
Supplement). The median estimates differ in sign between the two periods, reflecting the smaller and larger irrigation areas planned with each sample.
Relative utility values obtained for areas determined using discharge
simulations of 30 years are shown in Fig. 10 (and
in the Supplement). Median estimates are positive for most simulations, commensurate with the overestimated irrigation areas.
For all samples, the relative utility values for simulations using the MSWEP
dataset are found to be largest, with values between 0.3 and 0.65, indicating
a higher production loss due to the higher probability of water scarcity. For
simulations using the 30-year observed precipitation, consistent median
values between 0.18 and 0.45 are obtained, with the exception of one sample
(1993–1997).
The relative utility values obtained using irrigated areas determined with
the simulated discharges of only 5 years (Fig. 10 and the Supplement) show median estimates that vary widely between samples, from negative to strongly positive.
Relative utility value using an observed river discharge of 5 years
for water scarcity happening independently in any one month.
For the 1993–1997 period, the RUV obtained for the irrigated area determined
with observed discharges of 5 years is negative, consistent with the opportunity cost of planning too small an area.
Relative utility value using simulated river discharge of 30 and
5 years for water scarcity happening independently in any one month.
A large statistical spread in RUV is found in months where the probability of water scarcity is higher than the reference. This is clearly shown for MSWEP simulations, which have the largest estimates of the irrigated area. For months where the probability of water scarcity is lower than the reference, the statistical spread in RUV is low. In these cases the statistical spread of RUV is a result only of the spread of the reference annual expected utility, resulting from the distribution of the probability of water scarcity. The statistical spread of the RUVs is lower when the simulated annual expected utility and the reference annual expected utility are more similar, which means that the RUV is closer to zero, as shown when using the 1993–1997 sample. An absence of statistical spread for the RUVs reflects zero probability of water scarcity in both the simulated and reference expected annual utilities.
Even though the probability of water scarcity is not the highest in November,
the statistical spread of the RUV is the largest when water scarcity happens
in that month. This is due to the high sensitivity of the crop to water
deficit in November (an average yield reduction value of 1.4, against 0.8 in February).
Pooled relative utility value using observed river discharge of
5 years for water scarcity happening independently in February, May or
November.
The PRUV is obtained from the RUVs for each of the six samples in Sect. 3.4. In Figs. 11 and 12, the PRUV results for areas estimated using the observed discharges, and for the simulated discharges for 5 and 30 years, are shown for November, February and May. These are the representative months identified in Sect. 3.4, with similar results found for PRUVs when water scarcity happens independently in each month.
Pooled relative utility value using simulated river discharge of 5
and 30 years for water scarcity happening independently in February, May or
November.
The statistical spread of PRUVs represents the risk of randomly choosing one
irrigation area out of the six possible irrigation areas given by the six
calibration samples of 5 years. Results for the 5-year simulations show a
large statistical spread of the PRUVs, with the distribution positively
skewed. This skewness is due to the influence of one high RUV sample out of
the six RUV samples, resulting in a maximum positive PRUV for each
precipitation dataset, 0.18, 0.25, and 0.6. The statistical spread of PRUV
for the 5-year simulations with MSWEP precipitation is the largest among the
simulations, implying that the cost of choosing the irrigation area using
this dataset is the highest. Using 30 years of simulated discharges does
reduce the statistical spread in PRUV when compared to the 5-year
simulations. For observed precipitation, simulations with 5 years show a
wider range of PRUV than the corresponding 30-year simulations.
The statistical spread in PRUV when using observed discharge of 5 years
is comparable to that obtained with the 30-year discharge simulations forced with CHIRPS precipitation.
Results of PRUV show that using the CHIRPS global precipitation dataset in discharge simulations reduces the risk of choosing the irrigation area compared to discharge simulations with in situ and MSWEP precipitation.
In the Coello basin we have the good fortune to have a long period of record of hydrological data (1983–2012) to use as a reference for establishing the climatological availability and variability of the available water resource. This may not be the case in other basins. Water resource estimation may then need to be done with the limited information that is available. To help understand the risk of estimating the available water resources when only limited information is available, we used observed discharge with a shorter period of record (5 years) to calibrate a local hydrological model and applied this to obtain simulated discharge with a longer period of record (30 years), either using a precipitation dataset based on observed data (Rodriguez et al., 2017) or a global precipitation dataset, including CHIRPS and MSWEP (Beck et al., 2017; Funk et al., 2015). We establish six samples of 5 years to calibrate the parameters of a hydrological model, and simulate six possible discharges of 30 years, to imitate a setting in which it is not known a priori how representative the short record of available observed discharge is. For each sample the annual expected utility is determined, including the monthly probability of (non-)water scarcity using different irrigation areas from different discharge simulations and the annual crop production with water scarcity (not) happening in a month. Positive and negative relative utility values were found with different discharge simulations. Positive values indicate a crop production loss due to unexpected water scarcity for too large an irrigation area being planned. Negative values indicate an opportunity cost due to the planning of too small an irrigation area.
Results show that the RUV varies depending on which month water scarcity
happens in. While the spread in the estimates of probability of water
scarcity is found to be largest in the month of February, the spread in RUV
is larger when water scarcity happens in November. This is due to the
difference in sensitivity of the crop yield to water deficit, depending on
the growing stage of the crop. In the Coello basin, rice has an average
growing length of 4 months and is sown during the entire year. This means
that if water scarcity does happen in a particular month, four different
growth stages will be affected each with a different yield reduction factor
resulting in an average yield reduction value. If water scarcity happens in
November, the average yield reduction value from the four growing stages of
rice is 1.4. This means that the average yield reduction in November under an
equal degree of water deficit is 1.75 times higher than in February
(where the average yield reduction value is 0.8).
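The ratio quoted above follows directly from the two average yield reduction values:

```python
ky_november = 1.4              # average yield reduction value, November
ratio = 1.75                   # November vs. February, as stated in the text
ky_february = ky_november / ratio
print(round(ky_february, 2))   # 0.8
```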
For an irrigated area selected based on the estimate of water availability using simulated discharges, a decision maker takes an additional risk due to not knowing a priori how representative the data used for calibrating the model are of climatic variability. This is why we introduce the pooled relative utility value, PRUV, in order to evaluate the risk of choosing an irrigation area derived from different river discharge simulations. If the statistical spread of PRUV is low (high), then the cost incurred by choosing an irrigated area based on the results of the simulations is equally low (high). The pooled relative utility value results using the global precipitation CHIRPS showed a lower cost in choosing the irrigation area than PRUV results using both a dataset based on observed precipitation and the MSWEP global precipitation dataset. This would suggest that the CHIRPS precipitation should be used instead of both observed and MSWEP precipitation when determining the surface water availability for irrigation area planning, to reduce the risk of agricultural production loss due to choosing an irrigated area larger than the water availability can support. This is not a general conclusion, as it is closely related to how representative the precipitation dataset used is of the true precipitation amount and variability in the basin. The CHIRPS dataset does include observed data (Funk et al., 2015), similar to the data used in our study to establish the in situ precipitation dataset. In that sense, it is also an interpolated dataset, but with additional information from the satellite. This may well provide additional detail on the variability of precipitation in a tropical mountainous basin such as the Coello.
It is important to note that both CHIRPS and MSWEP are gauge corrected, so both would be expected to perform quite well. However, the datasets used to correct each product may differ, which is why we compare the number of stations used in the Coello basin for each of the precipitation products (in situ, CHIRPS and MSWEP); this helps in interpreting the PRUV results. Even though fewer stations are used for correction in the CHIRPS product (7) than in the in situ product (14), the results indicate that the satellite information included in CHIRPS still provides a reasonable representation of the basin precipitation. For the MSWEP product only three stations are used for correction, resulting in a poorer representation of the rainfall in the basin. In summary, the basin precipitation dataset derived from CHIRPS for the Coello basin is better than that derived from MSWEP. The higher resolution of the CHIRPS dataset compared to that of MSWEP no doubt also contributes in this medium-sized, mountainous basin. The poorer performance of the MSWEP data was not immediately obvious when evaluating the precipitation data using common indicators (e.g. KGE, bias), but became apparent only when evaluating the hydrological information for determining the irrigated area.
Interestingly, the performance of the model using the observed precipitation dataset is similar to that of the model using the CHIRPS precipitation dataset when considering common model performance statistics such as the Kling–Gupta efficiency (KGE) and percentage bias.
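For reference, the two statistics mentioned can be computed as follows. The discharge series are hypothetical; the KGE follows the standard decomposition of Gupta et al. (2009) into correlation, variability ratio and bias ratio, and the percentage bias is the usual volume-error definition.

```python
import math

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (a-1)^2 + (b-1)^2),
    with r the linear correlation, a the ratio of standard deviations
    and b the ratio of means (Gupta et al., 2009). Perfect score is 1."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = math.sqrt(sum((x - ms) ** 2 for x in sim) / n)
    so = math.sqrt(sum((x - mo) ** 2 for x in obs) / n)
    r = sum((s - ms) * (o - mo) for s, o in zip(sim, obs)) / (n * ss * so)
    a, b = ss / so, ms / mo
    return 1 - math.sqrt((r - 1) ** 2 + (a - 1) ** 2 + (b - 1) ** 2)

def pbias(sim, obs):
    """Percentage bias: positive values indicate overestimation."""
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

# Hypothetical monthly discharges (m3/s) for illustration only.
obs = [10.0, 12.0, 8.0, 15.0, 11.0]
sim = [9.5, 12.5, 8.5, 14.0, 11.0]
print(round(kge(sim, obs), 3), round(pbias(sim, obs), 2))
```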
An irrigation manager may be reluctant to use simulated instead of observed information until it is proven that the additional period of record of precipitation from, for example, global datasets compensates for the uncertainty introduced by the use of a hydrological model. In that sense, the PRUV results provide evidence that using discharge simulations forced with 30 years of CHIRPS precipitation is equivalent to using 5 years of observed discharge, as the risk of choosing the irrigation area is similar. As the period of record of datasets such as CHIRPS increases, this risk can be expected to reduce further.
Using a longer period of record of observed discharges will help make better estimates of the irrigated area that can be supported by the available water resources, but when the availability or quality of observed discharge is limited, extending the period of record using model-based discharge simulations provides an alternative for estimating the area to be cropped. The results of the model used in the Coello basin also show that the overestimation or underestimation of the planned irrigation area depends in part on the model bias, particularly on the ability of the calibrated model to provide reliable simulations for low-flow periods, which are the most critical in this application. In the case presented here we use a very simple model structure, and using simulated discharges from an enhanced model structure could be explored to obtain more accurate results.
We apply an extended hydro-economic framework to assess the benefit of using global precipitation datasets in surface water availability estimates, in order to reduce the risk of choosing the area that can be irrigated with available water resources based on limited available information. We estimate irrigation areas using observed river discharge with a period of record of 30 years (reference), and simulated river discharges from a hydrological model forced with in situ and global precipitation datasets (CHIRPS and MSWEP). The hydrological model is calibrated using independent observed river discharge samples of 5 years extracted from the reference period of 30 years, to emulate a data-scarce environment and the uncertainty as to whether available data with a short period of record are fully representative of climate variability. The relative utility value of using a particular dataset is determined based on the reference and simulated annual expected utility, which combines the monthly probability of (non-)water scarcity for the irrigation areas obtained with the annual crop production with water scarcity (not) happening in a month. The monthly probability of water scarcity depends on the true (reference-observed) water resource availability. Additional production losses are incurred if the irrigation area planned is too large, as water scarcity conditions will then occur more frequently (cost type 2), while too small an area results in an opportunity cost (cost type 1). The production loss also depends on how sensitive the crop is to water deficit in a particular month. The benefit of using either the in situ, CHIRPS or MSWEP dataset in reducing the cost of choosing the irrigation area, irrespective of the available sample of observed data used in calibrating the model, is evaluated through a pooled relative utility value, a joint estimate of the relative utility values of the 5-year samples.
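A simplified reading of the expected-utility step can be sketched as below. The monthly scarcity probabilities and production values are hypothetical, and the averaging over months is one plausible aggregation rather than the paper's exact formulation.

```python
def expected_utility(p_scarcity, prod_scarce, prod_ok):
    """Simplified annual expected utility: for each month, the annual
    production if water scarcity occurs in that month weighted by its
    probability, plus the no-scarcity production weighted by the
    complement, averaged over the twelve months (illustrative only)."""
    terms = [p * ps + (1 - p) * po
             for p, ps, po in zip(p_scarcity, prod_scarce, prod_ok)]
    return sum(terms) / len(terms)

def relative_utility_value(eu_simulated, eu_reference):
    """RUV: expected utility of the area chosen from simulated discharge,
    relative to the reference area chosen from 30 years of observations."""
    return eu_simulated / eu_reference

# Hypothetical values: higher scarcity risk in the dry months (Nov, Dec).
p = [0.05] * 10 + [0.30, 0.20]
prod_scarce = [80.0] * 12   # reduced annual production under scarcity
prod_ok = [100.0] * 12      # full annual production without scarcity

eu = expected_utility(p, prod_scarce, prod_ok)
print(round(eu, 2))
```

A too-large planned area would raise the monthly scarcity probabilities (cost type 2), while a too-small area would cap `prod_ok` below its potential (cost type 1), lowering the expected utility in either case.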
In the Coello basin in Colombia, where the framework was applied, it was found that while the performance metrics of the discharge simulations relate to the relative utility value, the pooled relative utility value provides a complete hydro-economic indicator to assess the risk of choosing the irrigation area based on observed or simulated discharge data. We find that for the Coello basin the CHIRPS precipitation dataset is more beneficial than the in situ or MSWEP precipitation, as the risk of choosing the irrigation area is lower due to a better estimate of climate variability. For all precipitation datasets evaluated, using a dataset with a length of 30 years leads to a lower risk than using a length of only 5 years. The risk of choosing the irrigation area based on discharge simulations with 30 years of CHIRPS precipitation is found to be similar to that of using 5 years of observed discharge. Hence, using an extended precipitation dataset to provide a longer record of discharge simulations (from 5 to 30 years) compensates for the uncertainty introduced by the model calibration.
In the Coello basin, the CHIRPS global precipitation dataset is recommended over the MSWEP dataset for estimating surface water availability to support the planning of irrigation areas. This dataset provides a good representation of the climatic variability in this medium-sized tropical basin, in part due to the correction of the dataset using observed station data. While the performance of the available global precipitation datasets would need to be evaluated case by case, the application of the extended hydro-economic framework using global precipitation datasets to force a locally calibrated hydrological model is shown here to support decisions on the adequate selection of irrigated areas in Colombia, and it can be applied in data-scarce basins around the world. Ensuring the use of adequate hydrological information for the estimation of surface water availability will promote improved decisions for irrigation area planning and prevent economic losses.
The location and availability of all the data (e.g. local and global precipitation and observed discharge) and model (Dynamic Water Balance Model – Zhang et al., 2008) used for this research are indicated in the paper, including references and links to repositories.
The supplement related to this article is available online at:
The authors declare that they have no conflict of interest.
This article is part of the special issue “Integration of Earth observations and models for global water resource assessment”. It is not associated with a conference.
This work received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 603608, Global Earth Observation for Integrated Water Resource Assessment (eartH2Observe). We would like to thank IDEAM for providing the discharge and precipitation data.
This research has been supported by the European Commission (grant no. EARTH2OBSERVE (603608)).
This paper was edited by Gianpaolo Balsamo and reviewed by two anonymous referees.