In Optical Burst Switching (OBS), data packets at the edge of the network are aggregated into larger units known as Data Bursts (DBs). When switching is performed synchronously in OBS core nodes (slotted switching), each DB must be segmented at the switch input into fixed-size units. Each unit is switched to a designated output in an "optical" time slot (a parameter of the "optical" switch). The purpose of our study is to recommend an optimum size for these optical slots that minimizes the overhead arising from the segmentation process. To estimate the total overhead due to this process, we take into account the statistical distribution of the burst size, the possible padding of the last burst segment (to completely fill an optical slot), and the overhead due to the optical-slot preamble.
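The trade-off behind the slot-size question can be sketched numerically. The following is a minimal illustration, not the paper's actual model: it assumes exponentially distributed burst sizes, a fixed per-slot preamble, and simply counts padding plus preamble bytes over a few candidate slot sizes (all parameter values are hypothetical).

```python
import math
import random

def segmentation_overhead(burst_bytes, slot_payload, preamble):
    """Wasted bytes for one burst: padding in the last slot plus
    one preamble per slot."""
    n_slots = math.ceil(burst_bytes / slot_payload)
    padding = n_slots * slot_payload - burst_bytes
    return padding + n_slots * preamble

def mean_relative_overhead(slot_payload, preamble,
                           mean_burst=50_000, n=100_000, seed=1):
    """Monte-Carlo estimate of overhead / useful payload, assuming
    exponentially distributed burst sizes (an illustrative choice)."""
    rng = random.Random(seed)
    total_ov = total_payload = 0
    for _ in range(n):
        b = max(1, int(rng.expovariate(1 / mean_burst)))
        total_ov += segmentation_overhead(b, slot_payload, preamble)
        total_payload += b
    return total_ov / total_payload

# Small slots pay more preamble, large slots pay more padding in the
# final segment; sweeping candidate sizes exposes the minimum.
best = min((mean_relative_overhead(s, preamble=64), s)
           for s in (512, 1024, 2048, 4096, 8192, 16384))
```

The sweep makes the two opposing overhead terms explicit: the preamble cost shrinks with larger slots while the expected padding of the last segment grows with them.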
A mechanism that is often proposed for Quality-of-Service differentiation in Optical Burst-Switched networks is offset-time management. In this paper we identify and explain some undesirable characteristics of this mechanism. The most important finding is that the burst-drop-probability differentiation attained for a given offset-time value depends strongly on the distribution of the burst durations. Controlling the differentiation is therefore difficult, since this distribution changes continually with the traffic conditions at the edge of the OBS network. We also find that offset-time management slightly increases the unfairness within the lower-priority classes, in the sense that longer bursts are dropped more frequently than shorter ones.
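As a toy illustration of how the attained differentiation depends on the burst-length distribution, one can simulate a single wavelength with Horizon (no-void-filling) scheduling. This is a simplified sketch of our own, not the model used in the paper; the 50/50 class split, unit mean burst length, and load value are assumptions made purely for illustration.

```python
import random

def horizon_sim(offset, burst_len, load=0.8, n=200_000, seed=3):
    """Single-wavelength OBS link with Horizon (no void filling)
    scheduling. Half the bursts are high priority (class 1) and reserve
    the channel `offset` time units ahead of arrival; a burst is dropped
    when its reservation would start before the current horizon.
    `burst_len` is a callable drawing one burst duration (mean 1)."""
    rng = random.Random(seed)
    t = horizon = 0.0
    drops, counts = [0, 0], [0, 0]
    for _ in range(n):
        t += rng.expovariate(load)      # Poisson burst arrivals
        cls = rng.randrange(2)          # 0 = low, 1 = high priority
        start = t + (offset if cls == 1 else 0.0)
        counts[cls] += 1
        if start < horizon:
            drops[cls] += 1             # channel reserved past our start
        else:
            horizon = start + burst_len(rng)
    return drops[0] / counts[0], drops[1] / counts[1]

# Same mean burst length, same offset -- only the length distribution
# differs between the two runs.
p0_exp, p1_exp = horizon_sim(offset=1.0, burst_len=lambda r: r.expovariate(1.0))
p0_det, p1_det = horizon_sim(offset=1.0, burst_len=lambda r: 1.0)
```

In both runs the offset gives the high-priority class the lower drop probability, but comparing the gap between classes across the exponential and deterministic cases shows how the same offset value yields a different degree of differentiation under different burst-duration distributions.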
We present an accurate model of burst traffic characteristics inside optical burst-switched networks. At the edge of such a network, a number of IP packets are collected before being injected into the core as a single unit. The sizes of these so-called bursts critically impact the performance of the various network elements, so a good understanding of their characteristics is an essential step in network performance engineering. A further abstraction of this assembly process yields a straightforward simulation model that can eliminate the need for packet-level simulation, reducing model complexity and simulation cost.
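The kind of abstraction mentioned above can be sketched as follows. Assuming, purely for illustration, Poisson packet arrivals, exponentially distributed packet sizes, and a timer-based assembler with a fixed timeout, the burst size is a compound-Poisson sum, so a simulator can draw burst sizes directly instead of replaying individual packet arrivals (all function names and parameters below are hypothetical):

```python
import math
import random

def packet_level_bursts(rate, mean_pkt, timeout, n_bursts, seed=7):
    """Reference packet-level assembler: Poisson packet arrivals
    (`rate` packets/s) with exponentially distributed sizes are summed
    over each fixed `timeout` assembly window."""
    rng = random.Random(seed)
    bursts = []
    for _ in range(n_bursts):
        t = size = 0.0
        while True:
            t += rng.expovariate(rate)            # next packet arrival
            if t > timeout:
                break
            size += rng.expovariate(1.0 / mean_pkt)
        bursts.append(size)
    return bursts

def _poisson(rng, lam):
    """Knuth's method for a Poisson variate (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def burst_level_bursts(rate, mean_pkt, timeout, n_bursts, seed=7):
    """Burst-level abstraction: draw the packet count per window
    directly (Poisson) and the total size as a Gamma variate (the sum
    of k exponential packet sizes), skipping the per-packet loop."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_bursts):
        k = _poisson(rng, rate * timeout)
        out.append(rng.gammavariate(k, mean_pkt) if k else 0.0)
    return out
```

Both generators produce burst-size samples with the same mean (`rate * timeout * mean_pkt`), but the burst-level version generates one random draw pair per burst instead of one event per packet, which is the simulation-cost reduction the abstraction buys.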