Current efforts aimed at detecting and identifying Near Earth Objects (NEOs) that pose potential risks to Earth use
moderately sized telescopes combined with image processing algorithms to detect the motion of these objects. The
search strategies of such systems involve multiple revisits to the same area of the sky at set intervals, so that
objects that move between observations can be identified against the static star field. The Dynamic
Logic algorithm, derived from Modeling Field Theory, has yielded significant improvements in the detection, tracking, and
fusion of ground radar images. As an extension of that work, the research in this paper examines Dynamic Logic's ability
to detect NEOs with minimal human-in-the-loop intervention. Although the research in this paper uses asteroids to test the
automated detection, the ultimate extension of this study is the detection of orbital debris. Many asteroid orbits are well
defined, so they serve as excellent test cases for this new application of the algorithm.
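As a rough illustration of the revisit strategy described above (and not of the Dynamic Logic algorithm itself), the following sketch differences two registered exposures of the same sky field and flags pixels that brighten between visits as candidate movers. The array names and the sigma threshold are assumptions made for illustration only.

    import numpy as np

    def candidate_movers(frame1, frame2, k_sigma=5.0):
        """Flag pixels that change between two registered exposures of the
        same sky field.  Static stars cancel in the difference; an object
        that moved between visits leaves a positive residual at its new
        position.  frame1/frame2: 2-D float arrays, background-subtracted."""
        diff = frame2 - frame1
        # Robust noise estimate from the median absolute deviation of the difference.
        sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
        mask = diff > k_sigma * sigma   # pixels that brightened between visits
        ys, xs = np.nonzero(mask)
        return list(zip(ys.tolist(), xs.tolist()))

In a survey, candidates from such pairwise differences over several revisits are linked into short tracklets before an orbit is attempted; Dynamic Logic, by contrast, associates parametric object models with the data through an iterative, fuzzy membership process rather than the hard threshold used in this toy sketch.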
KEYWORDS: Databases, Network security, Computer security, Manufacturing, Solids, Systems modeling, Video processing, Control systems, Colon, Data mining
This paper identifies an innovative middle-tier technique and design that provides a solid layer of network
security for a single source of human resources (HR) data that falls under the Federal Privacy Act. The
paper also discusses functionality for retrieving and updating data in a secure manner. It will be
shown that access to this information is limited by a security mechanism that authorizes all connections
based on both application (client) and user information.
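As a hypothetical sketch of the kind of middle-tier check described above (the client names, user roles, and in-memory storage are invented for illustration and are not the paper's design), a connection could be served only when both the calling application and the end user are recognized and the pairing is allowed:

    # Hypothetical middle-tier gate: a request is honored only if BOTH the
    # client application and the user are registered and their pairing is allowed.
    REGISTERED_CLIENTS = {"hr_portal", "payroll_batch"}           # illustrative
    USER_PERMISSIONS = {                                          # illustrative
        "jdoe":    {"read"},
        "hradmin": {"read", "update"},
    }
    ALLOWED_PAIRS = {("hr_portal", "jdoe"), ("hr_portal", "hradmin"),
                     ("payroll_batch", "hradmin")}

    def authorize(client_id: str, user_id: str, action: str) -> bool:
        """Return True only when the client, the user, and the client/user
        pairing are all recognized and the user holds the requested right."""
        return (client_id in REGISTERED_CLIENTS
                and action in USER_PERMISSIONS.get(user_id, set())
                and (client_id, user_id) in ALLOWED_PAIRS)

Placing such a gate in the middle tier means the Privacy Act data store itself never accepts a connection that has not already been vetted against both the application and the user identity, which is the access limitation the paper describes.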
There has been a lack of investigation into low-yield explosive attacks by terrorists on small but high-occupancy buildings. Mitigating the threat of terrorist attacks against high-occupancy buildings that house network equipment essential to the mission of an organization is also a challenging task, and it is difficult to predict how, why, and when terrorists may attack these assets. Many factors must be considered in creating a safe building environment. Although it is possible that the dominant threat mode may change in the future, bombings have historically been a favorite tactic of terrorists: ingredients for homemade bombs are easily obtained on the open market, as are the techniques for making them, and bombings are easy and quick to execute. This paper discusses the problems encountered, and the insights gained, in analyzing small-scale explosions on older military-base buildings. In this study, we examine the placement of various bombs on buildings using the shock-wave simulation code CTH and examine the damage effects on the interior of the building, particularly the damage incurred by a computer center. These simulation experiments provide data on the effectiveness of a building's security and an understanding of the phenomenology of shocks as they propagate through rooms and corridors. The purpose of the study is to motivate researchers to take seriously the threat that small-yield explosives pose to moderately sized buildings. Visualizations from this analysis are used to understand the complex flow of the air blasts around corridors and hallways. Finally, we make suggestions for improving the mitigation of such terrorist attacks. The intent of this study is not to provide breakthrough technology, but to provide a tool and a means for analyzing the material hardness of a building and, ultimately, an incentive for better security. The information mentioned in this paper is in the public domain and easily available via the internet as well as in any public library or bookstore; it is therefore unclassified and in no way reveals any new methodology or technology.
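The CTH hydrocode resolves the full three-dimensional shock propagation; as a much simpler point of reference (a back-of-the-envelope illustration only, not part of the study's methodology), the free-field overpressure history behind an ideal blast front is often approximated by the Friedlander waveform. The parameter values below are placeholders, not CTH output.

    import numpy as np

    def friedlander_overpressure(t, p_peak, t_dur, b=1.0):
        """Ideal free-field blast overpressure at time t after shock arrival.
        p_peak : peak overpressure, t_dur : positive-phase duration,
        b : decay constant.  Negative values past t_dur approximate the
        negative (suction) phase."""
        t = np.asarray(t, dtype=float)
        p = p_peak * (1.0 - t / t_dur) * np.exp(-b * t / t_dur)
        return np.where(t >= 0.0, p, 0.0)

Inside a building the picture is far messier: reflections off walls and channeling down corridors can amplify the loading well above this free-field value, which is exactly the flow behavior the CTH visualizations in the study are meant to expose.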
Vulnerabilities are a growing problem in both the commercial and government sectors. The latest vulnerability information compiled by CERT/CC for the year ending Dec. 31, 2002 reported 4,129 vulnerabilities, a 100% increase over 2001 [1] (the 2003 report had not been published at the time of this writing). It does not take long to realize that the growth rate of vulnerabilities greatly exceeds the rate at which they can be fixed, and that our nation's networks are consequently growing less secure at an accelerating rate. As organizations become aware of vulnerabilities they may initiate efforts to resolve them, but they quickly realize that the size of the remediation project is greater than their current resources can handle. In addition, many IT tools that suggest solutions to these problems in reality address only some of the vulnerabilities, leaving the organization unsecured and back to square one in its search for solutions. This paper proposes an auditing framework called NINJA (Network Investigation Notification Joint Architecture) for noninvasive daily scanning/auditing based on common security vulnerabilities that repeatedly occur in a network environment. The framework is used to perform regular audits in order to harden an organization's security infrastructure. It is based on the results obtained by the Network Security Assessment Team (NSAT), which emulates adversarial computer network operations for US Air Force organizations. Auditing is the most time-consuming factor in securing an organization's network infrastructure. The framework discussed in this paper uses existing scripting technologies to maintain a security-hardened system at a defined level of performance as specified by the computer security audit team. Mobile agents, which were under development at the time of this writing, are used at a minimum to improve the noninvasiveness of our scans. In general, noninvasive scans performed daily within an adequate framework reduce the security workload and improve the timeliness of remediation, as verified by the NINJA framework. A vulnerability assessment/auditing architecture based on mobile agent technology is proposed and examined at the end of the article as an enhancement to the current NINJA architecture.
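The framework itself is built on existing scripting technologies; as a hypothetical illustration of a daily noninvasive check (the baseline file, host list, and port range are invented, and this is not the NINJA code), a scheduled script can compare what is currently exposed against an approved baseline and report only the drift:

    import json, socket

    def open_ports(host, ports, timeout=0.5):
        """Return the subset of ports that accept a TCP connection
        (a light, noninvasive probe -- no payloads are sent)."""
        found = set()
        for port in ports:
            with socket.socket() as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:
                    found.add(port)
        return found

    def daily_audit(baseline_path="baseline.json"):
        """Compare today's exposure against the approved baseline and report drift."""
        baseline = json.load(open(baseline_path))   # {"host": [allowed ports]}
        for host, allowed in baseline.items():
            drift = open_ports(host, range(1, 1025)) - set(allowed)
            if drift:
                print(f"{host}: unapproved services on ports {sorted(drift)}")

Run nightly from a scheduler, a report of this kind narrows the audit team's attention to deviations from the agreed hardening level rather than re-auditing the entire enclave, which is the workload reduction the framework targets.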
The research discussed in this paper is a continuation of the author's previous research published in SPIE's Visual Information Processing Proceedings of 2003 [1], entitled "Improving the Performance of Content-Based Image Retrieval Systems". The SPIE article discussed a new method for clustering an image database based on level-one similarity using a new technique called the "enumeration of gradient states". This technique is based on the direction of the gradient: the gradient is converted into pixel moments, from which a value known as the "gradient spin excess" is computed to determine the complexity level of an image. This complexity level was used for clustering images into similarity groupings. Given this clustering, level-one similarity retrieval was improved by first determining the proper cluster membership of a query and then searching only within that cluster, rather than searching the whole database. This article expands the previous study with a theoretical discussion showing that complexity-based clustering using gradient spin excess is directly related to the degree of randomness (entropy) of pixel moments. In addition, we propose an improved gradient-states methodology that calculates the pixel moments of an image at various sub-block sizes and clusters the image database hierarchically using level-one similarity. Finally, it is shown theoretically as well as experimentally that the speed of similarity retrieval is of complexity O(n), a definite improvement over the traditional color histogram (L1-norm) similarity retrieval method.
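The abstract does not give the exact formula, so the following is only one plausible reading of "gradient spin excess": treat each pixel's gradient direction as a binary spin (up or down according to the sign of its vertical component), measure the excess of one state over the other per sub-block, and use the corresponding two-state entropy as the complexity score that drives clustering. The function name, spin convention, and block size below are assumptions.

    import numpy as np

    def spin_excess_and_entropy(gray, block=16):
        """One possible reading of 'gradient spin excess': per sub-block,
        label pixels spin-up/spin-down by the sign of the vertical gradient,
        then report the normalized excess and the two-state Shannon entropy."""
        gy, gx = np.gradient(gray.astype(float))
        spin_up = (gy >= 0.0)
        h, w = gray.shape
        excess, entropy = [], []
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                patch = spin_up[i:i + block, j:j + block]
                p = patch.mean()                    # fraction of spin-up pixels
                excess.append(abs(2.0 * p - 1.0))   # |N_up - N_down| / N
                eps = 1e-12
                entropy.append(-(p * np.log2(p + eps)
                                 + (1 - p) * np.log2(1 - p + eps)))
        return np.array(excess), np.array(entropy)

Under this reading, the relation the paper asserts falls out naturally: a block dominated by one spin state has a large excess and low entropy (orderly gradients), while a block with balanced states has zero excess and maximal entropy (random pixel moments), so ranking images by either quantity induces the same complexity-based clusters.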
Our world is dominated by visual information, and a tremendous amount of such information is added day by day. It would be impossible to cope with this explosion of visual data unless it were organized so that it can be retrieved efficiently and effectively. At the core of content-based image retrieval (CBIR) is the requirement that database elements be indexed to facilitate efficient retrieval. Most existing image retrieval systems are text-based, but images frequently have little or no accompanying textual information. Problems with text-based access to images have prompted increasing interest in the development of image-based solutions. CBIR, by contrast, relies on the characterization of primitive features such as color, shape, and texture that can be extracted automatically from the images themselves. Hence, the field of CBIR focuses on intuitive and efficient methods for retrieving images from a database based solely on the content contained in the images. This paper introduces a novel clustering methodology based on the gradient of images coupled with information theory (entropy) derived from the statistical mechanics of "spin-up" and "spin-down" states, to improve both the speed and the accuracy of retrieval in comparison with the traditional color-histogram L1-norm retrieval methodology. By expanding the interpretation of color in images to include a gradient-based description in conjunction with information theory, a new indexing method for content-based retrieval of images from an image database is developed that reduces false positives in the retrieval process.
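Since the traditional color-histogram L1-norm method is the baseline against which the proposed gradient/entropy indexing is compared, a minimal sketch of that baseline is useful for orientation; the bin count and function names are illustrative choices, not the paper's exact configuration.

    import numpy as np

    def color_histogram(img, bins=8):
        """Joint RGB histogram, normalized to sum to 1.  img: H x W x 3, uint8."""
        hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                                 range=((0, 256),) * 3)
        return hist.ravel() / hist.sum()

    def l1_retrieve(query_hist, database_hists, k=10):
        """Rank database images by L1 (city-block) distance to the query histogram."""
        dists = np.abs(database_hists - query_hist).sum(axis=1)
        return np.argsort(dists)[:k]

Because this baseline compares the query histogram against every image in the database on each query, it scans the full collection; the gradient-based clustering described above restricts the search to one similarity cluster, which is where the claimed gains in speed and in false-positive reduction come from.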