KEYWORDS: Stars, Satellites, Solar radiation models, Sensors, Data modeling, Data communications, Systems modeling, Ionization, Space operations, Signal detection
This paper covers research into assessing the potential impacts of cyber attacks that affect the networks and control systems of space vehicles, and into techniques to detect and mitigate such attacks. These systems, if subverted by malicious insiders, external hackers, and/or supply chain threats, can be commanded in ways that cause physical damage to the space platforms. Comparable attacks on Earth-bound cyber-physical systems include the Shamoon, Duqu, Flame, and Stuxnet exploits, which have been used to bring down foreign power generation and refining systems. This paper discusses the potential impacts of similar cyber attacks on space-based platforms through the use of simulation models, including custom models developed in Python using SimPy and commercial SATCOM analysis tools such as STK/SOLIS. The paper discusses the architecture and fidelity of the simulation model developed for performing the impact assessment, and walks through the application of an attack vector at the subsystem level and how it affects the control and orientation of the space vehicle. SimPy is used to model and extract raw impact data at the bus level, while STK/SOLIS is used to extract raw impact data at the subsystem level and to visually display the effect on the physical plant of the space vehicle.
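As a hedged illustration of the SimPy-based approach (a minimal sketch only; the subsystem class, attack timing, and drift rates below are invented for illustration and are not the paper's actual model), a discrete-event process can represent an attitude-control subsystem and an injected attack event:

    # Minimal sketch, assuming a 1 s control cycle and a hypothetical attack
    # that skews the attitude-control loop, illustrating bus-level modeling.
    import simpy
    import random

    class AttitudeControl:
        def __init__(self, env):
            self.env = env
            self.pointing_error_deg = 0.0   # accumulated pointing error
            self.compromised = False        # set True by the attack process
            env.process(self.control_loop())

        def control_loop(self):
            while True:
                yield self.env.timeout(1.0)  # 1 s control cycle
                if self.compromised:
                    # malicious torque command slowly drifts the platform
                    self.pointing_error_deg += random.uniform(0.05, 0.15)
                else:
                    # nominal loop nulls residual error
                    self.pointing_error_deg = max(0.0, self.pointing_error_deg - 0.01)

    def attack(env, acs, start_time):
        yield env.timeout(start_time)
        acs.compromised = True  # attack vector applied at the subsystem level

    env = simpy.Environment()
    acs = AttitudeControl(env)
    env.process(attack(env, acs, start_time=30.0))
    env.run(until=120.0)
    print(f"Pointing error after attack: {acs.pointing_error_deg:.2f} deg")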
There is a key need for anticipatory tools and techniques to assist command staff in Intelligently Preparing the Battlespace by predicting and assessing adversary and neutral courses of action (COAs) in a manner that enables the rapid defusing of undesirable military or socio-political situations. This paper discusses the development of an Adversary Prediction Environment (APE) that will provide this capability by leveraging soft computing techniques and grid computing resources to provide an environment that allows for rapid exploration and analysis of enemy COAs for a given set of scenarios. The APE accomplishes this by utilizing the prediction capabilities present in our Dynamic Situation Awareness and Prediction (DSAP) environment to apply operationally focused simulation through Joint Semi-Automated Forces (JSAF) to evaluate plan effectiveness. The paper discusses our efforts to identify prospective scenarios and define a library of basic adversary and neutral force plans, actions, and adversary objectives that can be used to model adversary behavior for the identified scenarios. The paper also covers our efforts to modify DSAP in support of an APE proof-of-concept that can be used to simulate and rank adversary plans.
RAM Laboratories is developing a more advanced real-time update capability for both the predictive and state-estimation features of its Dynamic Situational Awareness and Predictive (DSAP) Framework and its underlying Multiple Replication Framework in support of the Air Force Research Laboratory's Joint Synthetic Environment for Research and Development. The overall goal of the DSAP infrastructure is to give Commanders and their staff at Air Operations Centers the ability to perform "what-if" analysis of plans and alternatives "on-the-fly" while continually augmenting the real-time picture's sensor inputs with simulated, state-estimated assessments.
This paper discusses design and implementation efforts to provide a Dynamic Situational Awareness capability utilizing embedded simulations calibrated by real-time C4I inputs to estimate the state of unobservable elements of an operational picture. Specifically, this paper discusses enhancements made via a Calibrated Real-time Simulation component and a real-time simulation component, along with the process for providing real-time updates to running simulations.
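A minimal sketch of the calibration idea follows (the entity table, track feed, and kinematics are assumptions, not DSAP's actual interfaces): a running simulation periodically overwrites simulated entity state with observed C4I tracks, while unobserved entities continue to be estimated.

    from dataclasses import dataclass

    @dataclass
    class EntityState:
        x: float
        y: float
        observed: bool = False  # True when last refreshed from a live track

    def step_simulation(entities, dt):
        """Advance dead-reckoned (unobserved) entities; observed ones hold."""
        for e in entities.values():
            if not e.observed:
                e.x += 1.0 * dt   # placeholder kinematics
            e.observed = False    # stale until the next live update arrives

    def apply_live_updates(entities, tracks):
        """Calibration step: snap simulated state to each reported track."""
        for eid, (x, y) in tracks.items():
            if eid in entities:
                entities[eid].x, entities[eid].y = x, y
                entities[eid].observed = True

    entities = {"red-1": EntityState(0.0, 0.0), "red-2": EntityState(5.0, 5.0)}
    for t in range(10):
        live = {"red-1": (float(t), 0.0)} if t % 3 == 0 else {}  # intermittent feed
        apply_live_updates(entities, live)
        step_simulation(entities, dt=1.0)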
RAM Laboratories and AFRL are developing a software infrastructure to provide a Dynamic Situation Assessment and Prediction (DSAP) capability through the use of an embedded simulation infrastructure that can be linked to real-time Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) sensors and systems and Command and Control (C2) activities. The resulting capabilities will allow Commanders to evaluate and analyze Courses of Action and potential alternatives through real-time and faster-than-real-time simulation by executing multiple plans simultaneously across a computing grid. In order to support users in a distributed C2 operational capacity, the DSAP infrastructure is being web-enabled to support net-centric services and common data formats and specifications that will allow it to support users on the Global Information Grid. This paper reviews DSAP and its underlying Multiple Replication Framework architecture and discusses the steps that must be taken for it to participate in a Service-Oriented Architecture.
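The "multiple plans simultaneously across a computing grid" idea can be sketched, under stated assumptions, with one replication per worker process (the worker function and COA names below are invented; this is not the actual DSAP grid layer):

    import random
    from multiprocessing import Pool

    def run_replication(coa_name):
        """Hypothetical stand-in for a faster-than-real-time simulation run."""
        rng = random.Random(coa_name)            # deterministic per-plan seed
        return coa_name, rng.uniform(0.0, 1.0)   # placeholder effectiveness score

    if __name__ == "__main__":
        coas = ["coa_alpha", "coa_bravo", "coa_charlie"]
        with Pool(processes=len(coas)) as pool:
            for name, score in pool.map(run_replication, coas):
                print(f"{name}: predicted effectiveness {score:.2f}")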
For Air Operations Centers, there is a need to provide Commanders and their staff with real-time, up-to-the-second information regarding Red-Force, Blue-Force, and neutral-force status and positioning. These updates of the real-time picture provide Command Staff with dynamic situational awareness of their operations while considering current and future Courses of Action (COAs). A key shortfall in current capability is that intelligence, surveillance, and reconnaissance (ISR) sensors, electronic intelligence, and human intelligence only provide a snapshot of the operational world from "observable" inputs. While useful, this information only provides a subset of the entire real-time picture. To provide this "missing" information, techniques are required to estimate the state of Red, Blue, and neutral force assets and resources. One such technique for providing this "state" information is to utilize operationally focused simulation to estimate the unobservable data. RAM Laboratories and the Air Force Research Laboratory's Information Systems Research Branch are developing a Dynamic Situation Assessment and Prediction (DSAP) Software Framework that, in part, utilizes embedded real-time simulation in this manner.
This paper examines enhancements made to the DSAP infrastructure's Multiple Replication Framework (MRF) and reviews extensions made to provide estimated state information via calibrated real-time simulation. This paper also provides an overview of the Effectiveness Metrics that can be used to evaluate plan effectiveness with respect to the real-time inputs, simulated plan, and user objectives.
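One plausible shape for such a metric, sketched below with assumed fields and weighting (these are illustrations, not the paper's actual Effectiveness Metrics), blends fidelity to the real-time picture with progress toward user objectives:

    import math

    def divergence(sim_tracks, live_tracks):
        """Mean distance between simulated and observed positions of shared IDs."""
        shared = sim_tracks.keys() & live_tracks.keys()
        if not shared:
            return 0.0
        return sum(math.dist(sim_tracks[i], live_tracks[i]) for i in shared) / len(shared)

    def effectiveness(sim_tracks, live_tracks, objective_met_fraction, w=0.5):
        """Blend fidelity to the real-time picture with objective progress."""
        fidelity = 1.0 / (1.0 + divergence(sim_tracks, live_tracks))
        return w * fidelity + (1.0 - w) * objective_met_fraction

    sim = {"red-1": (10.0, 2.0), "red-2": (4.0, 4.0)}
    live = {"red-1": (11.0, 2.5)}
    print(f"score = {effectiveness(sim, live, objective_met_fraction=0.7):.3f}")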
Technological advances and emerging threats reduce the time between target detection and action to the order of a few minutes. To effectively assist with the decision-making process, C4I decision support tools must quickly and dynamically predict and assess alternative Courses Of Action (COAs) to assist Commanders in anticipating potential outcomes. These capabilities can be provided through the faster-than-real-time predictive simulation of plans that are continuously re-calibrated against the real-time picture. This capability allows decision-makers to assess the effects of re-tasking opportunities, giving them tremendous freedom to make time-critical, mid-course decisions.
This paper presents an overview and demonstrates the use of a software infrastructure that supports DSAP capabilities. These capabilities are demonstrated through the use of a Multi-Replication Framework that supports (1) predictive simulations using JSAF (Joint Semi-Automated Forces); (2) real-time simulation, also using JSAF, as a state estimation mechanism; and (3) real-time C4I data updates through TBMCS (Theater Battle Management Core Systems). This infrastructure allows multiple replications of a simulation to be executed simultaneously over a grid faster than real time, calibrated with live data feeds. A cost evaluator mechanism analyzes potential outcomes and prunes simulations that diverge from the real-time picture. In particular, this paper walks the user through the process of using the Multi-Replication Framework as an enhanced decision aid.
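The prune-on-divergence behavior of such a cost evaluator can be sketched as follows (the threshold, one-dimensional state, and replication names are illustrative assumptions, not the framework's actual mechanism):

    def prune_replications(replications, live_state, max_divergence=5.0):
        """Keep only replications whose prediction stays near observed reality."""
        survivors = {}
        for name, predicted in replications.items():
            error = abs(predicted - live_state)  # 1-D state for illustration
            if error <= max_divergence:
                survivors[name] = predicted
        return survivors

    replications = {"rep-1": 42.0, "rep-2": 61.0, "rep-3": 44.5}
    print(prune_replications(replications, live_state=43.0))  # rep-2 is pruned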
Recent technological advances and emerging threats greatly compress the timeline between target detection and action to the order of a few minutes. As such, decision support tools for today's C4I systems must assist commanders in anticipating potential outcomes by providing predictive assessments of alternate Courses Of Action (COAs). These assessments are supported by faster-than-real-time predictive simulations that analyze possible outcomes and re-calibrate with real-time sensor data or extracted knowledge as it arrives. This capability is known as Dynamic Situation Assessment and Prediction (DSAP). It allows decision-makers to assess the effects of re-tasking opportunities, providing them with tremendous latitude to make time-critical, mid-course decisions.
This paper details the development of a software infrastructure that supports a DSAP capability for decision aids as applied to a Joint Synthetic Battlespace for Research and Development (JSB-RD). This infrastructure allows objects to be dynamically created, deleted, and reconfigured, allows simulations to be calibrated with live data feeds, and reduces simulation overheads so that simulations can execute faster than real time to provide a predictive capability. In particular, this paper focuses on a Multiple Replication Framework that can be used to support a DSAP infrastructure.
Developing models for simulation is an arduous task. After building a high-fidelity model, computation time can be prohibitive for general testing due to processing at higher levels of resolution. One way to address this problem is to develop abstract representations of the models that only consider “key” variables or parameters. To identify these “key” variables or parameters, it may be desirable to determine the sensitivity of certain variables with respect to model outputs or response. One way of calculating the sensitivity of variables is to analyze output variables using clustering techniques. The MRMAide (Mixed Resolution Modeling Aide) technology employs such a sensitivity analysis as an enabling technology, allowing the program to test the sensitivity of certain variables and analyze the correlation of coupled variables. Using this tool helps the developer analyze how a model can be abstracted so that it can be rewritten to reduce the number of calculations while keeping an acceptable level of accuracy. Distributions can then be fed into these variables rather than calculating their values at each step, resulting in a lower-fidelity, yet fairly accurate, representation for given operating conditions.
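The feed-a-distribution-instead-of-computing step can be sketched as follows (a minimal sketch assuming a normal fit is adequate; the “expensive” subsystem and its statistics are invented for illustration):

    import math
    import random
    from statistics import mean, stdev

    def expensive_subsystem(t):
        """High-fidelity stand-in: a costly per-step calculation."""
        return math.sin(t * 0.1) + random.gauss(0.0, 0.05)

    # Characterization run: collect samples of the candidate variable.
    samples = [expensive_subsystem(t) for t in range(1000)]
    mu, sigma = mean(samples), stdev(samples)

    def abstracted_subsystem():
        """Low-fidelity replacement: draw from the fitted distribution."""
        return random.gauss(mu, sigma)

    print(f"fitted N({mu:.3f}, {sigma:.3f}); sample = {abstracted_subsystem():.3f}")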
The modeling and simulation community has developed many processes for certifying the credibility of simulation models. Many of these processes are implemented throughout the model development cycle and require comparison of the developed model with the system being simulated. With the increased focus on model reuse, these processes need to be tailored to address the development cycle of reusing existing credible models. This paper outlines the validation process that certifies the credibility of simulation models produced by MRMAide (Mixed Resolution Modeling Aide). MRMAide is a technology that semi-automates the development of model wrappers. These wrappers are used to resolve fidelity differences between models using mixed resolution modeling (MRM) techniques that allow for the reuse of existing simulation models. The MRMAide validation process builds on existing processes that are implemented throughout the development cycle of a simulation model and addresses MRMAide's development processes.
In today's programming environment, many people do not program with plug-and-play components or mixed resolution modeling in mind. Yet much of the programming world today is engaged in the redevelopment of models. Simulation models are not necessarily programmed in such a way that they easily plug into different programs. The enabling technology named MRMAide offers a user-friendly and faster way to integrate models. It has three distinct advantages: 1) models can be reused in other simulations; 2) low-fidelity models can be plugged in for back-of-the-envelope calculations and verification; and 3) high-fidelity models can be plugged into low-fidelity simulations. MRMAide is a GUI-based tool for C++ applications. This paper presents the concept and results of wrapping code so that mixed resolution modeling can be accomplished with less coding. The examples build from basic concepts to complex architectures. The first example is a unit conversion problem: the original program works in feet, while another program performs some of the same calculations entirely in inches. This example can be extrapolated to SI versus English units. Another example takes a military simulation and connects a new function to it. The current function takes no arguments, but the plugged-in function requires an azimuth, an elevation, and a boolean for launch status; this requires the creation of stubs, using probability distributions, to feed values into the system. The final example is a high-fidelity simulation into which a low-fidelity model is plugged.
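The first two wrapper patterns described above can be sketched in miniature (a sketch only; the function names and distributions are invented, and MRMAide itself targets C++ rather than the Python used here for brevity):

    import random

    def range_in_inches(distance_inches):
        """Existing model that expects inches."""
        return distance_inches * 2.0  # placeholder calculation

    def range_in_feet(distance_feet):
        """Wrapper resolving the feet-vs-inches mismatch."""
        return range_in_inches(distance_feet * 12.0) / 12.0

    def launch_model(azimuth_deg, elevation_deg, launched):
        """New plugged-in function needing arguments the old caller lacks."""
        return (azimuth_deg + elevation_deg) if launched else 0.0

    def launch_model_stub():
        """Zero-argument wrapper: stub the missing inputs with distributions."""
        return launch_model(
            azimuth_deg=random.uniform(0.0, 360.0),
            elevation_deg=random.uniform(0.0, 90.0),
            launched=random.random() < 0.5,
        )

    print(range_in_feet(10.0), launch_model_stub())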
The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be extended for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high-speed communications. By providing these services, JMASS can better address modeling domains requiring parallel, computationally intensive calculations such as clutter, vulnerability, and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.
KEYWORDS: Computer simulations, Defense and security, Computing systems, Computer architecture, Modeling and simulation, Systems modeling, Process modeling, Receivers, Telecommunications, Testing and analysis
SPEEDES, the Synchronous Parallel Environment for Emulation and Discrete Event Simulation, is a software framework that supports simulation applications across parallel and distributed architectures. SPEEDES is used as a simulation engine in support of numerous defense projects, including the Joint Simulation System (JSIMS), the Joint Modeling And Simulation System (JMASS), the High Performance Computing and Modernization Program's (HPCMP) development of a High Performance Computing (HPC) Run-time Infrastructure, and the Defense Modeling and Simulation Office's (DMSO) development of a Human Behavioral Representation (HBR) Testbed. This work documents some of the performance metrics obtained from benchmarking the SPEEDES Simulation Framework as of the summer of 2001. Specifically, this paper examines the scalability of SPEEDES's time management algorithms and simulation object event queues with respect to the number of objects simulated and events processed.
The Mixed Resolution Modeling Aide (MRMAide) technology is an effort to semi-automate the implementation of Mixed Resolution Modeling (MRM). MRMAide suggests ways of resolving differences in fidelity and resolution across diverse modeling paradigms. The goal of MRMAide is to provide a technology that allows developers to incorporate model components into scenarios other than those for which they were designed. Currently, MRM is implemented by hand, a tedious, error-prone, and non-portable process. MRMAide, in contrast, automatically suggests to a developer where and how to connect different components and/or simulations. MRMAide has three phases of operation: pre-processing, data abstraction, and validation. During pre-processing, the components to be linked together are evaluated in order to identify appropriate mapping points. During data abstraction, those mapping points are linked via data abstraction algorithms. During validation, developers receive feedback regarding their newly created models relative to existing baselined models. The current work presents an overview of the various problems encountered during MRM and the technologies utilized by MRMAide to overcome them.
This paper provides an overview of each of the layers contained in the SPEEDES architecture. SPEEDES is a simulation framework that promotes interoperability, portability, efficiency, flexibility, and maintainability for High Performance Computing applications. Specifically, SPEEDES targets parallel and distributed platforms via its advanced time management schemes and shared memory communications structures. SPEEDES currently supports a large user base centered in the DOD simulation community. This paper describes several of the layers and features of the SPEEDES Simulation Framework. In addition, this paper discusses some of the most recent advances to the SPEEDES framework including its Federation Object (FO) System and its support for HLA via the SPEEDES-HLA Gateway.
While the military and commercial communities increasingly rely on simulation to reduce cost, developing simulations of their complex systems can itself be costly. In order to reduce simulation costs, simulation developers have turned to collaborative simulation, reuse of existing simulation models, and model abstraction techniques to reduce both simulation development time and simulation execution time. This paper focuses on model abstraction techniques that can be applied to reduce simulation execution and development time, and on the effects those techniques have on simulation accuracy.
Clustering algorithms are useful whenever one needs to classify an excessive amount of information into a set of manageable and meaningful subsets. Using an analogy from vector analysis, a clustering algorithm can be said to divide state space into discrete chunks such that each vector lies within one chunk. These vectors can best be thought of as sets of features. A canonical vector for each region of state space is chosen to represent all vectors located within that region. This paper presents a survey of clustering algorithms, paying particular attention to those that require the least a priori knowledge about the domain being clustered. In the current work, an algorithm is compelling to the extent that it minimizes assumptions about the distribution of the vectors being classified.
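In the spirit of that criterion, the sketch below shows a single-pass "leader"-style clustering that needs only a distance threshold, not a preset cluster count or distributional assumptions (the threshold and sample points are illustrative; this is one simple algorithm of the kind surveyed, not the paper's recommendation):

    import math

    def leader_cluster(vectors, threshold):
        """Assign each vector to the first canonical vector within threshold,
        creating a new canonical vector (cluster leader) when none is close."""
        leaders = []
        assignments = []
        for v in vectors:
            for i, leader in enumerate(leaders):
                if math.dist(v, leader) <= threshold:
                    assignments.append(i)
                    break
            else:
                leaders.append(v)
                assignments.append(len(leaders) - 1)
        return leaders, assignments

    points = [(0.0, 0.0), (0.5, 0.1), (5.0, 5.0), (5.2, 4.9), (0.2, 0.3)]
    leaders, labels = leader_cluster(points, threshold=1.0)
    print(leaders)   # two canonical vectors
    print(labels)    # [0, 0, 1, 1, 0]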
Today's modeling and simulation community is faced with the problem of developing and managing large, complex system models comprised of a diverse set of subsystem component models. These component models may be described using varying amounts of detail and fidelity as well as differing modeling paradigms. Often, a complex simulation comprised of high-fidelity subcomponent models results in a more detailed system model than the simulation objective requires, and simulating such a model wastes simulation time relative to the simulation goals. One way to avoid wasting simulation cycles is to reduce the complexity of subcomponent models without affecting the desired simulation objective. The process of reducing the complexity of these subcomponent models is known as model abstraction. Model abstraction reduces subcomponent model complexity by eliminating, grouping, or estimating model parameters or variables at a less detailed level without grossly affecting the simulation results. One key issue in model abstraction is identifying the variables or parameters that can be abstracted away for a given simulation objective. This paper presents an approach to identifying candidate variables for model abstraction when considering typical C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance) hardware systems.
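One simple way to flag such candidates, sketched below with an invented stand-in model and variable set (this is a generic one-at-a-time sensitivity screen, not necessarily the paper's approach), is to rank inputs by how strongly small perturbations move the output; low-sensitivity variables become abstraction candidates:

    def system_model(bandwidth, latency, jitter):
        """Stand-in for a C4ISR hardware model's output response."""
        return 10.0 * bandwidth - 2.0 * latency + 0.01 * jitter

    baseline = {"bandwidth": 1.0, "latency": 5.0, "jitter": 3.0}

    def sensitivity(var, delta=0.1):
        """Normalized output change for a small perturbation of one variable."""
        lo, hi = dict(baseline), dict(baseline)
        lo[var] -= delta
        hi[var] += delta
        return abs(system_model(**hi) - system_model(**lo)) / (2.0 * delta)

    ranked = sorted(baseline, key=sensitivity, reverse=True)
    print(ranked)  # variables at the tail are abstraction candidates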
KEYWORDS: Systems modeling, Data modeling, Performance modeling, Control systems, Network architectures, Mathematical modeling, Stochastic processes, Associative arrays, Clocks, Standards development
Mixed-resolution modeling methods are used for developing large, complex system models from subsystem models, where each subsystem model may be described at varying levels of detail and complexity. The usefulness of creating mixed-resolution system models is that existing validated or legacy component models can be integrated into the overall system models, regardless of their level of detail. This eliminates the need to create or validate additional models of a system at a single specific level of detail. Mixed-resolution modeling methods are being utilized in the development of mission-, campaign-, and theater-level models; however, these techniques are also being successfully employed at lower levels of the modeling spectrum, particularly in the area of hardware/software design. This paper presents mixed-resolution modeling techniques as they apply to backplane-based computing systems. The techniques presented are used to create interface wrappers that handle information and timing differences between dataflow and functional modeling paradigms. The solutions required to resolve these information and timing differences in the engineering domain are similar to those required at the theater, campaign, and mission levels.
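A minimal sketch of such an interface wrapper, under simplified assumptions (the filter, block layout, and timestamp scheme are invented for illustration): an untimed per-sample functional model is adapted to a dataflow-style interface that exchanges timestamped blocks of samples.

    def functional_filter(sample):
        """Functional-paradigm model: one untimed sample in, one out."""
        return 0.5 * sample

    def dataflow_wrapper(block, t_start, dt):
        """Bridge the timing difference: apply the functional model across a
        block and attach the per-sample timestamps the dataflow side expects."""
        return [(t_start + i * dt, functional_filter(s)) for i, s in enumerate(block)]

    print(dataflow_wrapper([1.0, 2.0, 4.0], t_start=0.0, dt=0.001))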
This paper investigates the effects of thermal light-scattering fluctuations and demonstrates that these are the dominant noise source inherent in photorefractive and electro-optic media. Fundamental noise limits to dynamic range and channel capacity are determined. Two sources of light-scattering fluctuations are examined: (1) thermal fluctuations in the space-charge field, which induce corresponding fluctuations in the dielectric constant through the electro-optic effect, and (2) fluctuations associated with the optical Kerr effect. Calculations are presented for BaTiO3 and several other materials and are discussed in light of recent experimental measurements of dynamic range. Our results suggest a very large dynamic range for photorefractive materials (120-140 dB) that should prove useful for optical signal processing applications.
This paper examines power dissipation in a nonlinear optical medium during the coherent transfer of energy from a pump to a signal beam, and quantitatively relates this dissipation (through the fluctuation-dissipation theorem) to the spectrum of thermal fluctuations that give rise to light-scattering noise. Calculations are presented for two-wave mixing in an artificial Kerr medium using a powerful stochastic model for computer simulation of amplitude, phase, and intensity fluctuations due to light-scattering noise.
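For reference, the classical (high-temperature) form of the fluctuation-dissipation theorem of the kind invoked above relates the spectral density of a fluctuating variable to the dissipative part of the corresponding linear response function; the notation below follows common textbook usage and is not taken from the paper:

    % One-sided spectral density S_x of fluctuations in x, fixed by the
    % imaginary (dissipative) part of the response function \chi(\omega)
    % at temperature T:
    S_x(\omega) = \frac{2 k_B T}{\omega}\,\operatorname{Im}\chi(\omega)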
This paper examines thermal (light-scattering) fluctuations as a dominant noise source in nonlinear optical processes. Fundamental limits to conjugate wave fidelity and signal power requirements are obtained through the statistical thermodynamic treatment of light-scattering noise in four-wave mixing based on the fluctuation-dissipation theorem. Several types of nonlinear media are examined, including artificial Kerr suspensions, isotropic Kerr media, and fluids near a critical point. Where measurements are available, excellent quantitative agreement is obtained between theory and experiment.
Collinear microwave phase conjugation was observed in an artificial Kerr medium consisting of short graphite fibers suspended in a binary liquid mixture. Using an 18 GHz pump beam with up to 20 W continuous power, changes in the 94 GHz refractive index were characterized by interferometry. A nonperturbative method for describing the response of the medium was used to analyze the phase-shift measurements for the static birefringence and the time response as functions of microwave intensity.