Paper
Real Time Floating Point Computing-A Philosophy For Implementations
8 December 1978
Charles M. Rader
Abstract
The exact definition of floating point computation has always varied from one computer to another and from one implementation to another. However, three points are common to almost all systems in use today: 1) representation of numbers by a multiplier and an exponent, with fixed integer base, 2) restriction of multiplier magnitude to the range (1/b, 1) for uniqueness, 3) unique representation of zero. To implement the four operations, addition, subtraction, multiplication and division, for floating point, whether in hardware or in software, a worst case time can be identified which, for addition and subtraction, is often much larger than the average expected execution time, and which must be provided for in a real time system. An examination of the loss in accuracy associated with approximations which reduce these worst case times has led to a re-examination of an old idea, unnormalized floating point arithmetic, in the light of availability of modern hardware. We find that a system something like unnormalized floating point arithmetic is just right for most signal processing applications.
© (1978) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Charles M. Rader "Real Time Floating Point Computing-A Philosophy For Implementations", Proc. SPIE 0154, Real-Time Signal Processing I, (8 December 1978); https://doi.org/10.1117/12.938244
KEYWORDS
Signal processing
Computing systems
Signal attenuation
Binary data
Digital signal processing
Information technology
Electroluminescence