As we all know too well, building up a collaborative community around a software infrastructure is not easy. Besides recruiting enthusiasts to work as part of it, mostly for free, to succeed you also need to overcome a number of technical, sociological and, to our surprise, even political hurdles. The ALMA Common Software (ACS) was developed at ESO and partner institutions over the course of more than 10 years. While it was mainly intended for the ALMA Observatory, it was conceived early on as a generic distributed control framework. ACS has been periodically released to the public under the LGPL license, which encouraged around a dozen non-ALMA institutions to use ACS for both industrial and educational applications. In recent years, the Cherenkov Telescope Array and the LLAMA Observatory have also decided to adopt the framework for their own control systems. The aim of the “ACS Community” is to support independent initiatives in making use of the ACS framework and to further contribute to its development. The Community provides access to a growing network of volunteers eager to develop ACS in areas that are not necessarily in ALMA's interest, and/or were not within the original system scope. Current examples include support for additional OS platforms, extension of supported hardware interfaces, a public code repository and a build farm. The ACS Community draws on existing collaborations with Chilean and Brazilian universities, reaching out to promising engineers in the making. At the same time, projects actively using ACS have committed valuable resources to assist the Community's work. Well-established training programs like the ACS Workshops are also being continued through the Community's work. This paper gives a detailed account of the ongoing (second) journey towards establishing a world-wide open source collaboration around ACS.
The ACS Community is growing into a horizontal partnership across a decentralized and diversified group of actors, and we are excited about its technical and human potential.
KEYWORDS: Antennas, Optical correlators, Observatories, Phased arrays, Calibration, Interferometry, Data archive systems, Data processing, Telescopes, Current controlled current source
With the completion of the ALMA array, Development Projects are being initiated to expand the observatory’s
technical capabilities. The ALMA Phasing Project is one of the early ones, with the main goal of adding Very
Long Baseline Interferometry (VLBI) observation capabilities. This will enable ALMA to join observations with
other millimeter observatories having VLBI data capabilities around the globe. ALMA would therefore become
the most powerful millimeter VLBI station yet.
A minimal-impact approach has been taken to keep the overall work overhead at the observatory as low as
possible and to integrate seamlessly with the existing infrastructure. New hardware elements and software features
are being delivered to the observatory in incremental cycles, adhering to existing workflows.
This paper addresses one of the main software challenges of this project and its implementation: the continuous
phasing corrections of the ALMA antenna signals. Because the antenna signals are summed during online
processing and correlated with other stations' data after the observation, a properly phased array is a key requirement for successful VLBI observations.
A new observing mode that inherits all of the existing interferometry functionality is the cornerstone of
this development. Further additions include new correlator protocols to modify the data flow, new VLBI specific
device controllers, online phase solvers and observation metadata adaptations. All of these are being added
to existing ALMA Software subsystems, taking advantage of the modular design and reusing as much code as
possible.
The design has placed a strong focus on simulation capabilities, to verify as much of the functionality as
possible without requiring scarce telescope time. The first on-site tests of the phasing loop using the ALMA
baseline correlator and antennas were performed in early 2014, and the hardware is expected to be completely
installed by the middle of the same year.
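The phasing principle behind the summation described above can be sketched in a few lines. This is an illustrative toy model, not ALMA code: real phase solvers work on correlated visibilities, per frequency channel, and track drifts in time; the array shapes, reference-antenna choice, and synthetic signals here are assumptions made for the example.

```python
import numpy as np

def phase_and_sum(signals, ref=0):
    """Align complex antenna signals to a reference antenna, then sum them.

    signals: 2-D complex array, shape (n_antennas, n_samples).
    ref:     index of the reference antenna.
    """
    reference = signals[ref]
    phased = np.empty_like(signals)
    for i, s in enumerate(signals):
        # Estimate the constant phase offset of antenna i relative to the
        # reference from the zero-lag cross-correlation, then rotate it away.
        offset = np.angle(np.vdot(reference, s))
        phased[i] = s * np.exp(-1j * offset)
    return phased.sum(axis=0)

# Toy check: three copies of one signal with different phase offsets.
rng = np.random.default_rng(0)
base = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
offsets = [0.0, 1.2, -0.7]
signals = np.array([base * np.exp(1j * p) for p in offsets])

coherent = phase_and_sum(signals)
naive = signals.sum(axis=0)
# The phased sum recovers the full 3x amplitude; the naive sum loses
# coherence and stays noticeably below 3x.
print(abs(coherent).mean() / abs(base).mean())
print(abs(naive).mean() / abs(base).mean())
```

The design point the sketch illustrates is why phasing must run continuously: if the per-antenna offsets drift during an observation, the summed signal degrades exactly like the naive sum above.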
After the inauguration of the Atacama Large Millimeter/submillimeter Array (ALMA), the Software Operations Group in Chile has refocused its objectives to: (1) providing software support for tasks related to System Integration, Scientific Commissioning and Verification, as well as Early Science observations; (2) testing the remaining software features, still under development by the Integrated Computing Team across the world; and (3) designing and developing processes to optimize and increase the level of automation of operational tasks. Because of their different stakeholders, these tasks vary widely in importance, lifespan and complexity. Aiming to give every task the proper priority and traceability without overloading our engineers, we introduced the Kanban methodology into our processes to balance the demand on the team against the throughput of the delivered work.
The aim of this paper is to share experiences gained during the implementation of Kanban in our processes, describing the difficulties we have found, and the solutions and adaptations that led us to our current, still evolving implementation, which has greatly improved our throughput, prioritization and problem traceability.
The ALMA software is a large collection of modules, which implements the functionality needed for the observatory day-to-day operations, including among others Array/Antenna Control, Correlator, Telescope Calibration
and Data Archiving. Many software patches must periodically be applied to fix problems detected during operations or to introduce enhancements after a release has been deployed and used under regular operational
conditions. In this scenario, it has been imperative to establish, besides a strict configuration control system,
a weekly regression test to ensure that the modifications applied do not impact system stability and functionality.
A test suite has been developed for this purpose, which reflects the operations performed by the commissioning
and operations groups, and which aims to detect problems associated with the changes introduced in different versions
of ALMA software releases. This paper presents the evolution of the regression test suite, which started at the
ALMA Test Facility, and that has been adapted to be executed in the current operational conditions. Topics
about the selection of the tests to be executed, the validation of the obtained data and the automation of the
test suite are also presented.
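The core idea of such a regression check, validating that a new software version still produces results equivalent to a known-good reference, can be sketched as follows. This is an illustrative sketch, not the actual ALMA test suite; the metric names and tolerance are invented for the example.

```python
import math

def regression_check(result, reference, rtol=1e-3):
    """Compare named scalar metrics of a test run against a stored reference.

    Returns a list of human-readable failure descriptions (empty = pass).
    Real suites validate full datasets, but the principle is the same: the
    same observation script must keep producing equivalent results across
    software versions.
    """
    failures = []
    for name, ref_value in reference.items():
        value = result.get(name)
        if value is None:
            failures.append(f"{name}: missing from new result")
        elif not math.isclose(value, ref_value, rel_tol=rtol):
            failures.append(f"{name}: got {value}, expected {ref_value}")
    return failures

# Hypothetical metrics extracted from an observation's output data.
reference = {"phase_rms_deg": 12.4, "amplitude_mean": 1.02}
ok_run    = {"phase_rms_deg": 12.4003, "amplitude_mean": 1.02}
bad_run   = {"phase_rms_deg": 19.8}

print(regression_check(ok_run, reference))   # passes: empty list
print(regression_check(bad_run, reference))  # two failures reported
```

Keeping the comparison tolerance-based rather than exact is what lets the suite run under changing operational conditions without flagging benign numerical differences.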
The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least
66 reconfigurable high-precision antennas, located at the Chajnantor plain in the Chilean Andes at an elevation of 5000
m. Each antenna contains instruments capable of receiving radio signals from 31.3 GHz up to 950 GHz. These signals
are correlated inside a Correlator and the spectral data are finally saved into the Archive system together with the
observation metadata. This paper describes the progress in the development of the ALMA operation support software,
which aims to increase the efficiency of the testing, distribution, deployment and operation of the core observing
software. This infrastructure has become critical as the main array software evolves during the construction phase. In
order to support and maintain the core observing software, it is essential to have a mechanism to align and distribute the
same version of software packages across all systems. This is achieved rigorously with weekly regression tests and
strict configuration control. A build farm providing continuous integration and testing in simulation has been established
as well. Given the large number of antennas, it is also imperative to have a monitoring system that allows trend analysis of
each component, in order to trigger preventive maintenance activities. A challenge we are preparing for this year
is to test the whole ALMA software in a complete end-to-end operation, from proposal submission to
data distribution to the ALMA Regional Centers. The experience gained during deployment, testing and operation
support will be presented.
The ALMA Observatory is a challenging project in many ways. The hardware and software pieces were often designed
specifically for ALMA, based on overall scientific requirements. The observatory is still in its construction
phase, but it already started Early Science observations with 16 antennas in September 2011, and currently
(June 2012) has 39 accepted antennas, with 1 or 2 new antennas delivered every month. The finished array will
integrate up to 66 antennas in 2014.
The on-line software is a critical part of the operations: it controls everything from the low level real-time
hardware and data processing up to the observation scheduler and data storage. Many pieces of the software are
affected as the number of antennas grows: more processes are integrated into the distributed system,
and more data flows to the Correlator and Database. Although some early scalability tests were performed in
a simulated environment, the system proved to be very dependent on real deployment conditions and several
unforeseen scalability issues have been found in the last year, starting with a critical number of about 15
antennas. Processes that grow with the number of antennas tend to quickly demand more powerful machines,
unless alternatives are implemented.
This paper describes the practical experience of dealing with (and hopefully preventing) blocking scalability
issues during the construction phase, while the expectant users push the system to its limits. This may also be
a very useful example for other upcoming radio telescopes with a large number of receivers.
KEYWORDS: Antennas, Software development, Observatories, Optical correlators, Astronomy, Software engineering, Prototyping, Information technology, Solar thermal energy, Control systems
Starting 2009, the ALMA project initiated one of its most exciting phases within construction: the first antenna
from one of the vendors was delivered to the Assembly, Integration and Verification team. With this milestone and
the closure of the ALMA Test Facility in New Mexico, the JAO Computing Group in Chile found itself in the front
line of the project's software deployment and integration effort. Among the group's main responsibilities are the
deployment, configuration and support of the observation systems, in addition to infrastructure administration,
all of which needs to be done in close coordination with the development groups in Europe, North America
and Japan. Software support has been the primary point of interaction with the current users (mainly scientists,
operators and hardware engineers), as the software is normally the most visible part of the system.
During this first year of work with the production hardware, three consecutive software releases have been
deployed and commissioned. Also, the first three antennas have been moved to the Array Operations Site, at
5,000 meters elevation, and the complete end-to-end system has been successfully tested. This paper shares the
experience of this 15-person group as part of the construction team at the ALMA site, working together
with the Computing IPT, covering the achievements and the problems overcome during this period. It explores the excellent
results of teamwork, and also some of the troubles that such a complex and geographically distributed project
can run into. Finally, it looks ahead to the challenges still to come with the transition to the ALMA operations
plan.
KEYWORDS: Observatories, Astronomy, Lanthanum, Internships, Software development, Telescopes, Computing systems, Control systems, Process modeling, Lead
Observatories are not all about exciting new technologies and scientific progress. Some time has to be dedicated
to the future generations of engineers who will be on the front line a few years from now. Over
the past six years, ALMA Computing has been helping to build up, and collaborating with, a well-organized
engineering students' group at Universidad Técnica Federico Santa María in Chile. The Computer Systems
Research Group (CSRG) currently has wide collaborations with national and international organizations, mainly
in the astronomical observations field. The overall coordination and technical work is done primarily by students,
working side-by-side with professional engineers. This implies not only using high engineering standards, but
also advanced organization techniques.
This paper aims to present the way this collaboration has built up an identity of its own, independent of individuals,
starting from its origins: summer internships at international observatories, the open-source community, and
the short and busy student's life. The organizational model and collaboration approaches are presented, which
have evolved over the years with the growth of the group. This model is being adopted by other
university groups, and is also catching the attention of other areas inside the ALMA project, as it has produced
an interesting training process for astronomical facilities. Many lessons have been learned by all participants
in this initiative. The results achieved at this point include a large number of projects, funding
sources, publications, collaboration agreements, and a growing history of new engineers educated under this
model.
Code generation helps in smoothing the learning curve of a complex application framework and in reducing the
number of Lines Of Code (LOC) that a developer needs to craft. The ALMA Common Software (ACS) has
adopted code generation in specific areas, but we are now exploiting the more comprehensive approach of Model-Driven
code generation to transform a UML model directly into a full implementation in the ACS framework.
This approach makes it easier for newcomers to grasp the principles of the framework. Moreover, a lower
handcrafted LOC reduces the error rate. Additional benefits achieved by model driven code generation are:
software reuse, implicit application of design patterns, and automatic test generation. A model-driven approach
to design also makes it possible to use the same model with different frameworks, by generating for different
targets.
The generation framework presented in this paper uses openArchitectureWare as the model-to-text translator.
OpenArchitectureWare provides a powerful functional language that makes it easier to implement the correct
mapping of data types, the main difficulty encountered in the translation process. The output is an ACS
application readily usable by the developer, including the necessary deployment configuration, thus minimizing
any configuration burden during testing. The specific application code is implemented by extending generated
classes. Therefore, generated and manually crafted code are kept apart, simplifying the code generation process
and aiding the developers by keeping a clean logical separation between the two.
Our first results show that code generation dramatically improves code productivity.
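The separation between generated and handcrafted code described above can be illustrated with a minimal sketch. The class and method names are hypothetical, not actual generator output; the point is the pattern: the generator emits a base class carrying the framework plumbing, and the developer implements only the application-specific hooks in a subclass that is never overwritten on regeneration.

```python
# --- generated code (would be produced by the model-to-text templates) ---
class TemperatureSensorBase:
    """Generated skeleton for a hypothetical UML component 'TemperatureSensor'.

    Lifecycle plumbing and interface dispatch come from the model;
    business logic is left as hooks for the developer.
    """

    def initialize(self):
        # Framework lifecycle handling, fully generated.
        self._connected = True

    def get_temperature(self):
        # Generated dispatch: checks state, then delegates to the hook.
        if not getattr(self, "_connected", False):
            raise RuntimeError("component not initialized")
        return self.read_hardware()

    def read_hardware(self):
        raise NotImplementedError("implemented in the handcrafted subclass")


# --- handcrafted code: only the application-specific part ----------------
class TemperatureSensor(TemperatureSensorBase):
    def read_hardware(self):
        return 21.5  # stand-in for a real device access

sensor = TemperatureSensor()
sensor.initialize()
print(sensor.get_temperature())  # prints 21.5
```

Because the generated base class and the handwritten subclass live in separate files, the model can be regenerated at any time without touching the application code, which is what keeps the two cleanly apart.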