KEYWORDS: Digital signal processing, Probability theory, Data modeling, Digital video discs, Performance modeling, Data storage, Systems modeling, Time metrology
Presently, digital continuous media (CM) are well established as an integral part of many applications, yet scant attention has been paid to servers that can record such streams in real time. More and more devices, however, produce direct digital output streams. Hence, the need arises to capture and store these streams with an efficient recorder that can handle both recording and playback of many streams simultaneously and provide a central repository for all data. Because of the continuously decreasing cost of memory, more and more memory is available in a large-scale recording system. Unlike most previous work, which focuses on minimizing the server buffer size, this paper investigates how to effectively utilize the additional memory resources available in a recording system. We propose an effective resource management framework with two parts: (1) a dynamic memory allocation strategy, and (2) a deadline setting policy (DSP) that can be applied consistently to both playback and recording streams, satisfying the timing requirements of CM while ensuring fairness among different streams. Furthermore, to find the optimal memory configuration, we construct a probability model based on the classic M/G/1 queueing model and the recently developed Real-Time Queueing Theory (RTQT). Our model can predict (a) the missed-deadline probability of a playback stream, and (b) the blocking probability of recording streams, and it is applicable to admission control and capacity planning in a recording system.
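The probability model above builds on the classic M/G/1 queue. As an illustrative sketch only (not the paper's actual model; the function name and the example workload are assumptions), the Pollaczek-Khinchine formula gives the mean queueing delay of such a queue from the arrival rate and the first two moments of the service time:

```python
def mg1_mean_wait(lam, mean_s, mean_s2):
    """Mean queueing delay W_q of an M/G/1 queue via the
    Pollaczek-Khinchine formula: W_q = lam * E[S^2] / (2 * (1 - rho)).

    lam:     arrival rate (requests/s)
    mean_s:  E[S], mean service time (s)
    mean_s2: E[S^2], second moment of the service time (s^2)
    """
    rho = lam * mean_s  # server utilization
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization rho >= 1")
    return lam * mean_s2 / (2.0 * (1.0 - rho))

# Example: a deterministic 10 ms disk service time at 80 requests/s
# (rho = 0.8); for a deterministic service time, E[S^2] = E[S]^2.
wait = mg1_mean_wait(lam=80.0, mean_s=0.010, mean_s2=0.010 ** 2)
```

Under these assumed numbers the mean wait is about 20 ms; a model like the paper's would additionally relate such delays to per-stream deadlines to obtain missed-deadline and blocking probabilities.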
Presently, IP-networked real-time streaming media storage has become increasingly common as an integral part of many applications. In recent years, a considerable amount of research has focused on scalability issues in storage systems. Random placement of data blocks has proven to be an effective approach to balancing heterogeneous workloads in multi-disk environments. The main disadvantage of this technique, however, is that statistical variations can still result in short-term load imbalances in disk utilization, which in turn cause large variances in latency. In this paper, we propose a packet-level randomization (PLR) technique to address this challenge. We quantify the exact performance trade-off between our PLR approach and the traditional block-level randomization (BLR) technique analytically. Our preliminary results show that the PLR technique outperforms the BLR approach and achieves much better load balancing in multi-disk storage systems.
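The intuition behind packet-level randomization can be checked with a short simulation (an illustrative sketch only; the unit counts, disk count, and seed are our assumptions, not the paper's analysis): placing more, smaller units uniformly at random onto the same disks shrinks the variance of the per-disk load.

```python
import random
from statistics import pvariance

def randomized_load(num_units, num_disks, rng):
    """Place num_units equal-size units on disks uniformly at random
    and return each disk's share of the total load."""
    loads = [0] * num_disks
    for _ in range(num_units):
        loads[rng.randrange(num_disks)] += 1
    return [load / num_units for load in loads]

rng = random.Random(42)  # fixed seed for reproducibility
num_disks = 8
blr = randomized_load(1_000, num_disks, rng)       # BLR: 1000 blocks
plr = randomized_load(1_000 * 64, num_disks, rng)  # PLR: 64 packets/block

# Finer placement units -> lower per-disk load variance, i.e. less
# short-term imbalance in disk utilization.
blr_var, plr_var = pvariance(blr), pvariance(plr)
```

With uniform random placement the variance of a disk's load fraction scales roughly as 1/n in the number of placement units, which is why splitting each block into packets balances noticeably better.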
Peer-to-peer (P2P) streaming is emerging as a viable communications paradigm. Recent research has focused on building efficient and optimal overlay multicast trees at the application level, but scant attention has been paid to interactive scenarios where end-to-end delay is crucial. Furthermore, even algorithms that construct an optimal minimum spanning tree often make the unrealistic assumption that the processing time at each node is zero, although these delays can add up to a significant amount of time after just a few overlay hops and make interactive applications difficult. In this paper, we introduce a novel peer-to-peer streaming architecture called ACTIVE that is based on the following observation: even in large group discussions, only a fraction of the users are active at a given time. We term these users, who have the most critical demands for low latency, active users. The ACTIVE system significantly reduces the end-to-end delay experienced among active users while remaining capable of providing streaming services to very large multicast groups. ACTIVE uses realistic processing assumptions at each node and dynamically optimizes the multicast tree as the group of active users changes over time. Consequently, it provides virtually all users with the low-latency service that was previously possible only with a centralized approach. We present results that show the feasibility and performance of our approach.
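The point about nonzero per-node processing cost can be made concrete with a small sketch (the tree, link delays, and the 5 ms processing cost are illustrative assumptions of ours, not measurements from the paper): each overlay hop adds not just link delay but also a forwarding cost at the intermediate peer.

```python
# Overlay multicast tree as child -> (parent, link_delay_ms); "A" is
# the source. Names and delays are hypothetical.
TREE = {
    "B": ("A", 20.0),
    "C": ("A", 35.0),
    "D": ("B", 15.0),
}
PROC_MS = 5.0  # assumed processing cost per intermediate overlay node

def end_to_end_delay(node, tree=TREE, proc_ms=PROC_MS):
    """Delay from the source to `node`: the link delays on the path
    plus a processing cost at every intermediate overlay node."""
    delay, hops = 0.0, 0
    while node in tree:
        parent, link = tree[node]
        delay += link
        hops += 1
        node = parent
    # the source and the receiver themselves add no forwarding cost here
    return delay + proc_ms * max(hops - 1, 0)
```

Under these assumptions, node D sees 15 + 20 ms of link delay plus 5 ms of processing at B, i.e. 40 ms rather than the 35 ms a zero-processing model would predict; over deeper trees the gap grows with every hop, which is what hurts interactive use.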
KEYWORDS: Local area networks, Error control coding, Video, Internet, Data storage, Forward error correction, Multimedia, Error analysis, Environmental sensing
Large-scale continuous media (CM) system implementations require scalable servers most likely built from clusters of storage nodes. Across such nodes, random data placement is an attractive alternative to the traditional round-robin striping. One benefit of random placement is that additional nodes can be added with low data-redistribution overhead such that the system remains load balanced. One of the challenges in this environment is the implementation of a retransmission-based error control (RBEC) technique. Because data is randomly placed, a client may not know which server node to ask for a lost packet retransmission. We design and implement three RBEC techniques that utilize the benefits of random data placement in a cluster server environment while enabling a client to efficiently identify the correct server node for lost packet requests. We implement and evaluate our techniques with a one-, two-, four-, and eight-way server cluster and across local and wide-area networks. Our results show the feasibility and effectiveness of our approaches in a real-world environment and also identify one solution as generally superior to the other two.
KEYWORDS: Error control coding, Video, Local area networks, Internet, Data storage, Multimedia, Switches, Computing systems, Computer programming
Large-scale continuous media (CM) system implementations require scalable servers most likely built from clusters of storage nodes. Across such nodes random data placement is an attractive alternative to the traditional round-robin striping. One benefit of random placement is that additional nodes can be added with low data-redistribution overhead such that the system remains load balanced. One of the challenges in this environment is the implementation of a retransmission-based error control (RBEC) technique. Because data is randomly placed, a client may not know which server node to ask for a lost packet retransmission.
We have designed and implemented an RBEC technique that utilizes the benefits of random data placement in a cluster server environment while allowing a client to efficiently identify the correct server node for lost packet requests. We have implemented and evaluated our technique with a one-, two-, and four-way server cluster and across local and wide-area networks. Our results show the feasibility and effectiveness of our approach in a real-world environment.
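One way a client can find the right node under random placement, sketched here under assumptions of our own (a seeded pseudo-random placement function shared by client and server; not necessarily the scheme used in the papers above), is to make the "random" block-to-node mapping reproducible from a shared seed:

```python
import random

def node_for_block(object_id, block_no, num_nodes, placement_seed="s0"):
    """Recompute the pseudo-random node that stores a given block.

    If client and server derive placement from the same seed, a client
    can send a retransmission request for a lost packet directly to
    the node holding the block, without a central directory lookup.
    All identifiers here are hypothetical.
    """
    rng = random.Random(f"{placement_seed}/{object_id}/{block_no}")
    return rng.randrange(num_nodes)

# Client and server independently derive the same mapping:
node = node_for_block("clip-42", block_no=7, num_nodes=4)
```

The placement remains statistically uniform across nodes (good for load balance), yet any party holding the seed can locate a block in O(1) with no per-block metadata.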
KEYWORDS: Performance modeling, Video, Digital signal processing, Data storage, Data modeling, Information operations, Compact discs, Acquisition tracking and pointing, Radon, Phase modulation
In a scalable server that supports the retrieval and display of continuous media, both the number of simultaneous displays and the expected startup latency of a display increase as a function of additional disk bandwidth. Based on a striping technique and a round-robin placement of data, this paper describes object replication and request migration as two alternative techniques to minimize startup latency. In addition to developing analytical models for these two techniques, we report on their implementation using a scalable server. The results obtained from both the analytical models and the experimental system demonstrate the effectiveness of the proposed techniques.
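A minimal sketch of how object replication can cut startup latency (the shortest-queue heuristic and the numbers below are our own illustration, not the paper's analytical model): when an object is replicated on several disks, a new display request can be routed to the least-loaded disk holding a copy instead of waiting on one fixed disk.

```python
def dispatch(queue_lengths, replica_disks):
    """Route a display request to the least-loaded disk that holds a
    replica of the requested object (shortest-queue heuristic)."""
    return min(replica_disks, key=lambda disk: queue_lengths[disk])

# Four disks with current queue lengths; the object is replicated on
# disks 0, 2, and 3 (hypothetical numbers).
queues = {0: 3, 1: 0, 2: 5, 3: 1}
chosen = dispatch(queues, replica_disks=[0, 2, 3])  # disk 3, queue 1
```

Request migration is the complementary idea: when the chosen disk's queue later grows, an already-queued request can be moved to a less-loaded disk that also holds a replica.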