Satellite-based rainfall estimates (SREs) have become a promising data source to overcome some limitations of ground-based rainfall measurements, in particular for hydrological and other environmental applications. This study evaluates the spatial and temporal performance of four long-term SRE products (TMPA 3B42v7, CHIRPSv2, MSWEPv1.1 and MSWEPv2.2) over the complex topography and climatic gradients of Chile. Time series of precipitation measured at 371 stations are compared against the corresponding grid cell of each SRE (at its original spatial resolution) at different temporal scales (daily, monthly, seasonal, annual). The modified Kling-Gupta efficiency (KGE′), along with its three individual components, is used to assess the performance of each SRE, while two categorical indices (POD and fBIAS) are used to evaluate the skill of each SRE in correctly capturing different precipitation intensities.
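The metrics named above can be sketched in a few lines. This is a minimal illustrative implementation using the standard textbook definitions of KGE′ (Kling et al., 2012), POD and frequency bias; the function names and the 0.1 mm/day rain/no-rain threshold are assumptions for illustration, not taken from the study.

```python
import numpy as np

def kge_prime(sim, obs):
    """Modified Kling-Gupta efficiency and its three components:
    correlation r, bias ratio beta, variability ratio gamma
    (gamma uses coefficients of variation, per Kling et al., 2012)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    beta = sim.mean() / obs.mean()
    gamma = (sim.std(ddof=1) / sim.mean()) / (obs.std(ddof=1) / obs.mean())
    kge = 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
    return kge, r, beta, gamma

def pod_fbias(sim, obs, threshold=0.1):
    """Probability of detection (POD) and frequency bias (fBIAS)
    for rain/no-rain classification at an intensity threshold [mm/day]."""
    sim_rain = np.asarray(sim) >= threshold
    obs_rain = np.asarray(obs) >= threshold
    hits = np.sum(sim_rain & obs_rain)
    misses = np.sum(~sim_rain & obs_rain)
    false_alarms = np.sum(sim_rain & ~obs_rain)
    pod = hits / (hits + misses)
    fbias = (hits + false_alarms) / (hits + misses)
    return pod, fbias
```

A perfect product (simulation identical to observations) yields KGE′ = 1, POD = 1 and fBIAS = 1; fBIAS above 1 indicates the product reports rain more often than the gauges.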
Results revealed that all SREs performed best in Central-Southern Chile (32.18-36.4°S), in particular in low- and mid-elevation zones (0-1000 m a.s.l.). Seasonally, all products performed best in terms of KGE′ during the wet autumn and winter seasons (MAM-JJA) compared to summer (DJF). In addition, all SREs were able to correctly identify no-rain events, but on rainy days the SREs that did not use a local precipitation dataset to recalibrate their estimates showed low skill in accurately classifying different precipitation intensities.
Overall, MSWEPv2.2 showed the best performance at all time scales and country-wide, owing to its use of a Chilean dataset of daily observations to calibrate its precipitation estimates, making it a good candidate for hydrological applications in Chile. Finally, we conclude that when the in situ precipitation dataset used to evaluate different SREs does not cover the headwaters of the catchments, the obtained performances should only be considered a first guess of how well a given SRE represents the actual amount of water in an area.
The environmental conditions that allow optimal astronomical observations are often found at sites that are far from human settlements and of difficult access, implying limited infrastructure availability that translates into excessive costs and limited bandwidth. With the growing availability and affordability of optical technologies, the astronomical scientific community, alone or joining forces with other actors, has managed over the last decade to boost the communication capacity available to several of the astronomical installations in the Atacama Desert of northern Chile, successfully increasing the efficiency and effectiveness of the existing observatories and setting the basis for the coming ones. After providing a short summary of the projects developed to enable better communications and of the future initiatives currently foreseen, the paper focuses on the following showcases, from users that differ in size and aims, in the communities served and in geographical location: a) the observation of the first light from a gravitational-wave source (ESO, ALMA, et al.); b) the use of virtual presence to bring the observer where things happen (ESO/PARANAL); c) remote operations for a robotic installation (OCA); d) contributing to the development of the local environment (REUNA); e) providing the "muscle" for current and future data challenges (ALMA). By illustrating how communication has transformed the way research and education are done, these examples demonstrate that improved communication is paramount to achieving better and, in some cases, astonishing new results, both in terms of science and in enriching the communities involved, scientific and general alike.
KEYWORDS: Antennas, Software development, Observatories, Optical correlators, Astronomy, Software engineering, Prototyping, Information technology, Solar thermal energy, Control systems
Starting in 2009, the ALMA project entered one of the most exciting phases of its construction: the first antenna from one of the vendors was delivered to the Assembly, Integration and Verification team. With this milestone and the closure of the ALMA Test Facility in New Mexico, the JAO Computing Group in Chile found itself on the front line of the project's software deployment and integration effort. Among the group's main responsibilities are the deployment, configuration and support of the observation systems, in addition to infrastructure administration, all of which must be done in close coordination with the development groups in Europe, North America and Japan. Software support has been the primary point of interaction with the current users (mainly scientists, operators and hardware engineers), as the software is normally the most visible part of the system.
During this first year of work with the production hardware, three consecutive software releases have been deployed and commissioned. In addition, the first three antennas have been moved to the Array Operations Site, at 5,000 meters elevation, and the complete end-to-end system has been successfully tested. This paper shares the experience of this 15-person group as part of the construction team at the ALMA site, working together with the Computing IPT, covering the achievements reached and the problems overcome during this period. It explores the excellent results of teamwork, as well as some of the troubles that such a complex and geographically distributed project can run into. Finally, it addresses the challenges still to come with the transition to the ALMA operations plan.
The Atacama Large Millimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North America, and Japan. ALMA will consist of at least 50 twelve-meter antennas operating in the millimeter and submillimeter wavelength range. It will be located at an altitude above 5000 m in the Chilean Atacama Desert. The ALMA Test Facility (ATF), located in New Mexico, USA, is a proving ground for the development and testing of hardware, software, commissioning and operational procedures.
At the ATF, emphasis has shifted from hardware testing to software and operational functionality. Supporting the varied goals of the ATF requires control software that is stable yet flexible enough to integrate newly developed features. For this purpose, regression testing has been introduced in the form of a semi-automated procedure. This supplements the established offline testing and focuses on operational functionality as well as verifying that previously fixed faults do not re-emerge.
The regression tests are carried out on a weekly basis as a compromise between the developers' response time and the available technical time. The frequent feedback allows the validation of submitted fixes and the prompt detection of side effects and reappearing issues. Results from nine months are presented that show the evolution of test outcomes, supporting the conclusion that regression testing helped improve the speed of convergence towards stable releases at the ATF. The tests also provided an opportunity to validate newly developed or refactored software at an early stage at the test facility, supporting its eventual integration. It is hoped that this regression test procedure will be adapted to commissioning operations in Chile.
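The core of such a procedure is tracking per-test outcomes across weekly runs and flagging tests that once passed but now fail, i.e. the reappearing issues the abstract mentions. The sketch below is purely illustrative and not the ATF's actual tooling; the data layout (a dict of weekly runs) and function name are assumptions.

```python
def find_regressions(history):
    """Given {week: {test_name: 'pass' | 'fail'}}, return the tests that
    passed in some earlier weekly run but fail in the latest one.
    These are candidate regressions: previously fixed faults re-emerging
    or side effects of newly submitted changes."""
    weeks = sorted(history)
    latest = history[weeks[-1]]
    # Collect every test that has passed at least once before the latest run.
    ever_passed = set()
    for week in weeks[:-1]:
        for test, outcome in history[week].items():
            if outcome == 'pass':
                ever_passed.add(test)
    # A regression is a latest-run failure among those previously passing tests.
    return sorted(t for t, outcome in latest.items()
                  if outcome == 'fail' and t in ever_passed)
```

Run weekly, a report like this lets developers separate long-standing failures (never passed, not regressions) from newly broken functionality, which is what makes the feedback loop converge toward stable releases.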