Latency refers to a short period of delay (usually measured in milliseconds) introduced by the conversion between analog and digital representations of sound. Devices such as computers can only process digital data, so the analog signal a computer receives on its microphone or line-in inputs must be converted to digital form. After processing, the data must be converted back to an analog signal before it can be output to speakers and played back.
This conversion between analog and digital takes a short amount of time, which is known as latency. Although each conversion consumes only a very small interval, the delays can accumulate when the data is handed off between several layers of software.
One example of latency is a musical keyboard connected to a computer. When the user presses a key, an analog audio signal travels along the connecting wire as electrical current. The computer converts the signal to a digital format and processes it according to any settings chosen by the user. Once processing is complete, the digital signal is converted back to an analog sound wave (again represented by current in the wire), which is then sent to the speaker.
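The delay contributed by each stage can be estimated from its buffer size and the sample rate, and because each software layer adds its own buffer, the per-stage delays add up. A minimal sketch of this arithmetic (the buffer sizes and stage names below are assumptions for illustration, not measurements from any particular device or driver):

```python
# Illustrative sketch: estimating cumulative audio latency from buffer sizes.
# The stage names and buffer sizes are made-up example values, not
# measurements from any real device or driver.

def buffer_latency_ms(frames: int, sample_rate_hz: int) -> float:
    """Time in milliseconds needed to fill or drain one buffer."""
    return frames / sample_rate_hz * 1000.0

SAMPLE_RATE = 44100  # CD-quality sample rate, in Hz

# Each layer that hands the audio off adds its own buffer.
stages = {
    "ADC input buffer": 256,      # frames
    "driver buffer": 512,
    "application buffer": 1024,
    "DAC output buffer": 256,
}

total_ms = sum(buffer_latency_ms(n, SAMPLE_RATE) for n in stages.values())
for name, frames in stages.items():
    print(f"{name}: {buffer_latency_ms(frames, SAMPLE_RATE):.2f} ms")
print(f"total round-trip latency: {total_ms:.2f} ms")
```

With these example numbers the total comes to roughly 46 ms, which shows why shaving buffers at every layer matters for real-time audio work.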
Latency in Computer Audio
Latency can be a particular problem on current Microsoft Windows audio platforms, but is much less so on Apple's Mac OS X and most Linux operating systems. A popular workaround is Steinberg's ASIO, which bypasses the operating system's audio layers and connects audio signals directly to the sound card's hardware. Most professional and semi-professional audio applications use ASIO drivers, allowing Windows users to work with audio in real time, e.g. for digital multitrack recording.
Latency in Broadcast
Audio latency can also be experienced in broadcast systems where someone is contributing to a live broadcast over a satellite link.
Java Platform, Micro Edition, or Java ME, is a Java platform designed for mobile devices and embedded systems. Target devices range from industrial controls to mobile phones and set-top boxes. Java ME was formerly known as Java 2 Platform, Micro Edition (J2ME).
Java ME was designed by Sun Microsystems; the platform replaced a similar technology, PersonalJava. Originally developed under the Java Community Process as JSR 68, the different flavors of Java ME have evolved in separate JSRs. Sun provides a reference implementation of the specification, but has tended not to provide free binary implementations of its Java ME runtime environment for mobile devices, rather relying on third parties to provide their own.
As of 22 December 2006, the Java ME source code is licensed under the GNU General Public License, and is released under the project name phoneME.
As of 2008, all Java ME platforms are restricted to JRE 1.3 features and use that version of the class file format (internally known as version 47.0). Should Sun ever declare a new round of Java ME configuration versions that support later class file formats and language features, such as those corresponding to JRE 1.5 or 1.6 (notably, generics), it will entail extra work on the part of all platform vendors to update their JREs.
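The class file format version mentioned above is recorded in the first eight bytes of every .class file: a 4-byte magic number (0xCAFEBABE) followed by 2-byte minor and major version numbers, all big-endian. A minimal sketch of checking it (the header layout follows the published class file format; the helper function name is ours):

```python
import struct

def class_file_version(data: bytes) -> tuple:
    """Return (major, minor) format version from a .class file header."""
    magic, minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    return major, minor

# A header as produced by a JDK 1.3-era compiler: format version 47.0,
# the version Java ME platforms are restricted to.
header = struct.pack(">IHH", 0xCAFEBABE, 0, 47)
print(class_file_version(header))  # (47, 0)
```

A platform vendor's tooling could use a check like this to reject class files newer than its runtime supports.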
Java ME devices implement a profile. The most common of these are the Mobile Information Device Profile aimed at mobile devices, such as cell phones, and the Personal Profile aimed at consumer products and embedded devices like set-top boxes and PDAs. Profiles are subsets of configurations, of which there are currently two: the Connected Limited Device Configuration (CLDC) and the Connected Device Configuration (CDC).
Solaris is a UNIX-based operating system introduced by Sun Microsystems in 1992 as the successor to SunOS.
Solaris is known for its scalability, especially on SPARC systems, and for originating many innovative features such as DTrace and ZFS. Solaris supports SPARC-based and x86-based workstations and servers from Sun and other vendors, with efforts underway to port to additional platforms.
Solaris is certified against the Single Unix Specification. Although it was historically developed as proprietary software, it is supported on systems manufactured by all major server vendors, and the majority of its codebase is now open source software via the OpenSolaris project.
Usage with installation
Solaris can be installed from physical media or a network for use on a desktop or server.
Solaris can be interactively installed from a text console on platforms without a video display and mouse. This option may be chosen for servers in a rack in a remote data center, accessed via a terminal server or even a dial-up modem.
Solaris can be interactively installed from a graphical console. This option may be chosen for personal workstations or laptops, used locally, where a graphical console is normally available.
Solaris can be automatically installed over a network. System administrators can customize installations with scripts and configuration files, including configuration and automatic installation of third-party software, without purchasing additional software management utilities.
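Automated network installations of this kind are typically driven by a small profile file that the installer reads instead of prompting an operator. The fragment below is an illustrative sketch of such a profile (the keywords follow Solaris JumpStart conventions; the particular values are assumptions chosen for the example, not a recommended configuration):

```
install_type    initial_install
system_type     standalone
partitioning    default
cluster         SUNWCuser
```

A site would pair profiles like this with begin/finish scripts to install and configure third-party software during the same unattended run.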
When Solaris is installed, the operating system will reside on the same system where the installation occurred. Applications may be individually installed on the local system, or can be mounted via the network from a remote system.
Usage without installation
Solaris can be used without separately installing the operating system.
Voice over Internet Protocol (VoIP) is a general term for a family of transmission technologies for delivery of voice communications over IP networks such as the Internet or other packet-switched networks. Other terms frequently encountered and synonymous with VoIP are IP telephony, Internet telephony, voice over broadband (VoBB), broadband telephony, and broadband phone.
Internet telephony refers to communications services (voice, facsimile, and/or voice-messaging applications) that are transported via the Internet, rather than the public switched telephone network (PSTN). The basic steps involved in originating an Internet telephone call are conversion of the analog voice signal to digital format and compression/translation of the signal into Internet protocol (IP) packets for transmission over the Internet; the process is reversed at the receiving end.
VoIP systems employ session control protocols to control the set-up and tear-down of calls as well as audio codecs which encode speech allowing transmission over an IP network as digital audio via an audio stream. Codec use is varied between different implementations of VoIP (and often a range of codecs are used); some implementations rely on narrowband and compressed speech, while others support high fidelity stereo codecs.
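To make the audio stream concrete: each chunk of encoded speech is carried in a packet that begins with a fixed 12-byte RTP header. A minimal sketch of packing that header by hand (the field layout follows the RTP specification; the payload type, sequence number, timestamp, and SSRC values are made-up example values):

```python
import struct

def rtp_header(payload_type: int, seq: int, timestamp: int, ssrc: int) -> bytes:
    """Build the fixed 12-byte RTP header: version 2, no padding,
    no extension, no CSRC entries, marker bit clear."""
    first_byte = 2 << 6                 # V=2, P=0, X=0, CC=0
    second_byte = payload_type & 0x7F   # M=0, 7-bit payload type
    return struct.pack(">BBHII", first_byte, second_byte,
                       seq & 0xFFFF, timestamp & 0xFFFFFFFF,
                       ssrc & 0xFFFFFFFF)

# Example: payload type 0 (PCMU, i.e. G.711 mu-law); the other values
# are arbitrary demo numbers a real sender would generate per stream.
header = rtp_header(payload_type=0, seq=1, timestamp=160, ssrc=0x1234ABCD)
print(len(header), header.hex())
```

In a real call the session control protocol (e.g. SIP) negotiates which codec and payload type both ends will use before any RTP packets flow.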
Voice over IP has been implemented in various ways using both proprietary and open protocols and standards. Examples of technologies used to implement Voice over Internet Protocol include:
* IP Multimedia Subsystem (IMS)
* Session Initiation Protocol (SIP)
* Real-time Transport Protocol (RTP)
A notable proprietary implementation is the Skype network. Other examples of specific implementations and a comparison between them are available in Comparison of VoIP software.
MPEG-4 Part 14, or the MP4 file format, is a multimedia container format standard specified as a part of MPEG-4. It is most commonly used to store digital video and digital audio streams, especially those defined by MPEG, but can also be used to store other data such as subtitles and still images. Like most modern container formats, MPEG-4 Part 14 allows streaming over the Internet. A separate hint track is used to include streaming information in the file. The official filename extension for MPEG-4 Part 14 files is .mp4, thus the container format is often referred to simply as MP4.
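Structurally, an MP4 container is a sequence of "boxes", each beginning with a 4-byte big-endian size (covering the whole box, header included) and a 4-byte ASCII type. A minimal sketch of walking the top-level boxes (the layout follows the ISO base media file format that MP4 builds on; the sample bytes are hand-constructed for the example):

```python
import struct

def list_boxes(data: bytes):
    """Yield (type, size) for each top-level box in an MP4 byte stream."""
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size < 8:  # the size field includes the 8-byte header itself
            break
        yield box_type.decode("ascii"), size
        offset += size

# Two tiny hand-built boxes: an 'ftyp' box declaring the mp42 brand,
# followed by an empty 'free' (padding) box.
sample = (struct.pack(">I4s", 16, b"ftyp") + b"mp42" + struct.pack(">I", 0)
          + struct.pack(">I4s", 8, b"free"))
print(list(list_boxes(sample)))  # [('ftyp', 16), ('free', 8)]
```

Real files nest further boxes (movie metadata, track headers, sample tables) inside these top-level containers, but the size/type framing is the same at every level.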
Some devices advertised as "MP4 players" are simply MP3 players that also play AMV video and/or some other video format, and do not play the MPEG-4 Part 14 format.
Almost any kind of data can be embedded in MPEG-4 Part 14 files through private streams. The registered codecs for MPEG-4 Part 12-based files are published on the website of the MP4 Registration Authority (mp4ra.org), but most of them are not widely supported by MP4 players. The widely supported codecs and additional data streams are:
* Video: MPEG-4 Part 10 (also known as H.264/MPEG-4 AVC) and MPEG-4 Part 2; other compression formats are less used: MPEG-2 and MPEG-1.
* Audio: Advanced Audio Coding (AAC, a subpart of MPEG-4 Part 3); other compression formats are less used: MPEG-1 Audio Layer III (MP3), Apple Lossless, and the MPEG-4 Part 3 Audio Object Types: Audio Lossless Coding (ALS), Scalable Lossless Coding (SLS), MPEG-1 Audio Layer II (MP2), MPEG-1 Audio Layer I (MP1), CELP, HVXC (speech), TwinVQ (very low bitrates), Text To Speech Interface (TTSI), SAOL (MIDI), and others.
* Subtitles: MPEG-4 Timed Text (also known as 3GPP Timed Text). Some private stream examples include Nero's use of DVD subtitles (VobSub) in MP4 files.