What is a “PDA” ?

May 24, 2016

PDA (Personal Digital Assistant) – A mobile device (also known as a handheld PC or personal data assistant) that functions as a personal information manager. The term evolved from Personal Desktop Assistant, a software term for an application that prompts or prods the user of a computer with suggestions or provides quick reference to contacts and other lists. PDAs were largely discontinued in the early 2010s after the widespread adoption of highly capable smartphones, in particular iOS- and Android-based devices.

Nearly all PDAs can connect to the Internet. A PDA has an electronic visual display, enabling it to include a web browser; all models also have audio capabilities, enabling use as a portable media player, and most can also be used as mobile phones. Most PDAs can access the Internet, intranets or extranets via Wi-Fi or wireless wide area networks, and most employ touchscreen technology.

The first PDA, the Organizer, was released by Psion in 1984. It was followed in 1991 by Psion’s Series 3, which began to resemble the more familiar PDA style and included a full keyboard. The term PDA was first used on January 7, 1992 by Apple Computer CEO John Sculley at the Consumer Electronics Show in Las Vegas, Nevada, referring to the Apple Newton. In 1994, IBM introduced the first PDA with full mobile phone functionality, the IBM Simon, which can also be considered the first smartphone. In 1996, Nokia introduced a PDA with full mobile phone functionality, the 9000 Communicator, which became the world’s best-selling PDA and spawned a new category of PDAs: the “PDA phone”, now called the “smartphone”. Another early entrant in this market was Palm, with a line of PDA products that began in March 1996. The terms “personal digital assistant” and “PDA” apply to smartphones but are not used in marketing, media, or general conversation to refer to devices such as the BlackBerry, iPad or iPhone.

——————————————————————

Source: Wikipedia


What is “P2P” ?

May 17, 2016

P2P (Peer-to-Peer) – In computing or networking, a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the application, and are said to form a peer-to-peer network of nodes.

Peers make a portion of their resources, such as processing power, disk storage or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in contrast to the traditional client-server model, in which the consumption and supply of resources are divided. Emerging collaborative P2P systems go beyond the era of peers simply doing similar things while sharing resources; they look for diverse peers that can bring unique resources and capabilities to a virtual community, empowering it to take on tasks beyond those any individual peer could accomplish, yet that benefit all the peers.
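
As a rough illustration of this dual role, here is a minimal C sketch (assuming POSIX sockets and threads; the port numbers, command-line usage and the one-line “resource” are invented for the example). Each process runs a tiny server in one thread, making its data available to other peers, while the same process can also connect to another peer and fetch that peer’s data:

/* Minimal peer-to-peer node sketch: each peer is both supplier and consumer. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static const char *shared_blob = "data held by this peer";

/* Server half: answer any connecting peer with our shared resource. */
static void *serve(void *arg)
{
    int port = *(int *)arg;
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    int yes = 1;
    setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);
    bind(lsock, (struct sockaddr *)&addr, sizeof addr);
    listen(lsock, 8);
    for (;;) {
        int c = accept(lsock, NULL, NULL);
        if (c < 0) continue;
        write(c, shared_blob, strlen(shared_blob));
        close(c);
    }
    return NULL;
}

/* Client half: fetch the resource offered by another peer. */
static void fetch(const char *host, int port)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    if (connect(s, (struct sockaddr *)&addr, sizeof addr) == 0) {
        char buf[256] = {0};
        ssize_t n = read(s, buf, sizeof buf - 1);
        if (n > 0) printf("got \"%s\" from %s:%d\n", buf, host, port);
    }
    close(s);
}

int main(int argc, char **argv)
{
    /* Usage: ./peer <my_port> [other_host other_port] */
    if (argc < 2) return 1;
    int my_port = atoi(argv[1]);
    pthread_t t;
    pthread_create(&t, NULL, serve, &my_port);   /* supplier role */
    sleep(1);                                    /* let the listener start */
    if (argc >= 4)
        fetch(argv[2], atoi(argv[3]));           /* consumer role */
    pthread_join(t, NULL);                       /* keep serving */
    return 0;
}

Started on two machines (or on two ports of one machine), each instance is simultaneously a supplier and a consumer, with no central server coordinating the exchange.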

While P2P systems had previously been used in many application domains, the architecture was popularized by the file sharing system Napster, originally released in 1999. The concept has inspired new structures and philosophies in many areas of human interaction. In such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has emerged throughout society, enabled by Internet technologies in general.

——————————————————————

Source: Wikipedia


What is a “Parallel Port” ?

May 10, 2016

Parallel Port – A type of interface found on computers (personal and otherwise) for connecting peripherals. In computing, a parallel port is a parallel communication physical interface, also known as a printer port or Centronics port. It was an industry de facto standard for many years and was finally standardized as IEEE 1284 in the late 1990s, which defined the bi-directional Enhanced Parallel Port (EPP) and Extended Capability Port (ECP) versions. Today, the parallel port interface is seeing decreasing use because of the rise of Universal Serial Bus (USB) devices, along with network printing using Ethernet.

The parallel port interface was originally known as the Parallel Printer Adapter on IBM PC-compatible computers. It was primarily designed to operate a line printer that used IBM’s 8-bit extended ASCII character set to print text, but could also be used to adapt other peripherals. Graphical printers, along with a host of other devices, have since been designed to communicate with the system through it.
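
To give a sense of how the interface works at the register level, here is a minimal C sketch of sending one byte, assuming a Linux machine with a legacy LPT1 port at the traditional base address 0x378 and root privileges for ioperm(); real applications would normally go through the kernel’s parport/ppdev drivers instead. The eight data lines carry a whole byte at once, and a strobe pulse tells the printer to latch it:

/* Sketch only: raw register access to a legacy LPT1 parallel port on x86 Linux. */
#include <stdio.h>
#include <sys/io.h>     /* ioperm(), inb(), outb() (x86 Linux, glibc) */
#include <unistd.h>

#define LPT1_BASE    0x378          /* data register                           */
#define LPT1_STATUS  (LPT1_BASE+1)  /* status register (busy line on bit 7)    */
#define LPT1_CONTROL (LPT1_BASE+2)  /* control register (strobe on bit 0)      */

int main(void)
{
    if (ioperm(LPT1_BASE, 3, 1) != 0) {     /* request access to the 3 I/O ports */
        perror("ioperm (needs root)");
        return 1;
    }

    unsigned char byte = 'A';

    while (!(inb(LPT1_STATUS) & 0x80))      /* wait until the printer is not busy */
        usleep(10);

    outb(byte, LPT1_BASE);                  /* put all 8 bits on the data lines   */
    usleep(1);
    outb(inb(LPT1_CONTROL) | 0x01, LPT1_CONTROL);   /* assert strobe  */
    usleep(1);
    outb(inb(LPT1_CONTROL) & ~0x01, LPT1_CONTROL);  /* release strobe */

    printf("sent 0x%02x on LPT1\n", byte);
    return 0;
}

A serial port, by contrast, would have to shift those same eight bits out one at a time over a single data line.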

——————————————————————

Source: Wikipedia


What is an “Optical Disc Drive” ?

May 3, 2016

Optical Disc Drive (ODD) – In computing, a disc drive that uses laser light or electromagnetic waves within or near the visible light spectrum as part of the process of reading or writing data to or from optical discs. Some drives can only read from certain discs, but recent drives can both read and record; these are also called burners or writers. Compact discs, DVDs, and Blu-ray discs are common types of optical media which can be read and recorded by such drives. Drive types that are no longer in production include the CD-ROM drive, the CD writer drive, and the combo (CD-RW/DVD-ROM) drive. As of 2015, the DVD writer drive was the most common type in desktop PCs and laptops. DVD-ROM drives, BD-ROM drives, Blu-ray Disc combo (BD-ROM/DVD±RW/CD-RW) drives, and Blu-ray Disc writer drives are also available but are in less demand.

Optical disc drives are an integral part of standalone appliances such as CD players, VCD players, DVD players, Blu-ray disc players and DVD recorders, of certain desktop video game consoles, such as the Sony PlayStation 4, Microsoft Xbox One, and Nintendo Wii U, and of certain portable video game consoles, such as the Sony PlayStation Portable. They are also very commonly used in computers to read software and consumer media distributed on disc, and to record discs for archival and data-exchange purposes. Floppy disk drives, with a capacity of 1.44 MB, have been made obsolete: optical media are cheap and have vastly higher capacity, better suited to the large files in use since the days of the floppy disc, and the vast majority of computers and much consumer entertainment hardware include optical writers. USB flash drives, which are high-capacity, small, and inexpensive, are suitable where read/write capability is required.

Disc recording is restricted to storing files playable on consumer appliances (films, music, etc.), relatively small volumes of data (e.g., a standard DVD holds 4.7 gigabytes) for local use, and data for distribution, but only on a small scale; mass-producing large numbers of identical discs is cheaper and faster than individual recording.

Optical discs are used to back up relatively small volumes of data, but backing up entire hard drives, which as of 2015 typically hold many hundreds of gigabytes or even multiple terabytes, is less practical than it was when drive capacities were smaller. Large backups are often made on external hard drives instead, as their price has dropped to a level that makes this viable; in professional environments, magnetic tape drives are also used.

——————————————————————

Source: Wikipedia


What is a “Multi-Core Processor” ?

April 26, 2016

Multi-Core Processor – A single computing component with two or more independent actual processing units (called “cores”), which are the units that read and execute program instructions. The instructions are ordinary CPU instructions such as add, move data, and branch, but the multiple cores can run multiple instructions at the same time, increasing overall speed for programs amenable to parallel computing. Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor or CMP), or onto multiple dies in a single chip package.

Processors were originally developed with only one core. In the mid-1980s, Rockwell International manufactured versions of the 6502 with two 6502 cores on one chip as the R65C00, R65C21, and R65C29, sharing the chip’s pins on alternate clock phases. Other multi-core processors were developed in the early 2000s by Intel, AMD and others.

Multi-core processors may have two cores (dual-core CPUs, for example, AMD Phenom II X2 and Intel Core Duo), three cores (tri-core CPUs, for example, AMD Phenom II X3), four cores (quad-core CPUs, for example, AMD Phenom II X4, Intel’s i5 and i7 processors), six cores (hexa-core CPUs, for example, AMD Phenom II X6 and Intel Core i7 Extreme Edition 980X), eight cores (octa-core CPUs, for example, Intel Xeon E7-2820 and AMD FX-8350), ten cores (deca-core CPUs, for example, Intel Xeon E7-2850), or more.

A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message-passing or shared-memory inter-core communication methods. Common network topologies used to interconnect cores include bus, ring, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores; heterogeneous multi-core systems have cores that are not identical. Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, superscalar, vector, or multithreading.
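
The shared-memory style of inter-core communication can be illustrated with a small POSIX-threads sketch in C (the array size and two-way split are arbitrary illustration values): two threads, ideally scheduled on different cores, read the same in-memory array, and the main thread combines their partial results:

/* Shared-memory parallelism sketch: two threads sum halves of one shared array. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000

static long data[N];            /* shared memory visible to every thread/core */

struct slice { int start, end; long sum; };

static void *partial_sum(void *arg)
{
    struct slice *s = arg;
    s->sum = 0;
    for (int i = s->start; i < s->end; i++)
        s->sum += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1;                         /* fill with known values */

    struct slice halves[2] = { {0, N / 2, 0}, {N / 2, N, 0} };
    pthread_t t[2];

    for (int i = 0; i < 2; i++)              /* ideally one thread per core */
        pthread_create(&t[i], NULL, partial_sum, &halves[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    printf("total = %ld\n", halves[0].sum + halves[1].sum);   /* prints 1000000 */
    return 0;
}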

Multi-core processors are widely used across many application domains including general-purpose, embedded, network, digital signal processing (DSP), and graphics (GPU).

The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that can be run in parallel simultaneously on multiple cores; this effect is described by Amdahl’s law. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core’s cache(s), avoiding use of much slower main system memory. Most applications, however, are not accelerated so much unless programmers invest a prohibitive amount of effort in re-factoring the whole problem. The parallelization of software is a significant ongoing topic of research.
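
Amdahl’s law can be written as speedup = 1 / ((1 - p) + p / n), where p is the fraction of the work that can run in parallel and n is the number of cores. The short C program below simply tabulates this formula for a few illustrative values of p, showing how quickly the serial fraction comes to dominate:

/* Worked example of Amdahl's law; the values of p are only illustrative. */
#include <stdio.h>

static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);       /* predicted overall speedup */
}

int main(void)
{
    int cores[] = {2, 4, 8, 16};
    double fractions[] = {0.50, 0.90, 0.99};   /* parallelizable fraction p */

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 4; j++)
            printf("p = %.2f, n = %2d cores -> speedup %.2fx\n",
                   fractions[i], cores[j], amdahl(fractions[i], cores[j]));
        printf("\n");
    }
    /* Even with p = 0.90, 16 cores give only about a 6.4x speedup, because
       the remaining serial 10% still has to run on a single core. */
    return 0;
}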

——————————————————————

Source: Wikipedia


What is a “Memory Stick” ?

April 19, 2016

Memory Stick – A removable flash memory card format launched by Sony in October 1998; the name is also used generally to describe the whole family of Memory Sticks. In addition to the original Memory Stick, this family includes the Memory Stick PRO, a revision that allows greater maximum storage capacity and faster file transfer speeds; the Memory Stick Duo, a small-form-factor version of the Memory Stick (including the PRO Duo); and the even smaller Memory Stick Micro (M2). In December 2006, Sony added the Memory Stick PRO-HG, a high-speed variant of the PRO intended for high-definition video and still cameras. Memory Stick cards can be used in the Sony PSP, and in Sony XDCAM EX camcorders via the MEAD-SD01 adapter. SanDisk and Lexar are among the few third-party Memory Stick producers. Kingston offers universal microSD to Memory Stick Pro Duo adapters, but these are unofficial.

——————————————————————

Source: Wikipedia


What is a “MACRO” ?

April 12, 2016

MACRO (short for “macroinstruction”, from Greek μακρο- ‘long’) – In computer science it is a rule or pattern that specifies how a certain input sequence (often a sequence of characters) should be mapped to a replacement output sequence (also often a sequence of characters) according to a defined procedure. The mapping process that instantiates (transforms) a macro use into a specific sequence is known as macro expansion. A facility for writing macros may be provided as part of a software application or as a part of a programming language. In the former case, macros are used to make tasks using the application less repetitive. In the latter case, they are a tool that allows a programmer to enable code reuse or even to design domain-specific languages.

Macros are used to make a sequence of computing instructions available to the programmer as a single program statement, making the programming task less tedious and less error-prone. (Thus, they are called “macros” because a big block of code can be expanded from a small sequence of characters.) Macros often allow positional or keyword parameters that dictate what the conditional assembler program generates and have been used to create entire programs or program suites according to such variables as operating system, platform or other factors. The term derives from “macro instruction”, and such expansions were originally used in generating assembly language code.
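
The C preprocessor is a familiar example of such a facility. In the minimal sketch below (the macro names and values are invented for illustration), each macro use is textually replaced with its defined pattern before the compiler ever sees the code:

/* Macro expansion with the C preprocessor. */
#include <stdio.h>

#define PI 3.14159265358979          /* object-like macro: simple substitution */
#define SQUARE(x) ((x) * (x))        /* function-like macro with a parameter   */
#define CIRCLE_AREA(r) (PI * SQUARE(r))

int main(void)
{
    /* The preprocessor expands CIRCLE_AREA(2.0) to
       (3.14159265358979 * ((2.0) * (2.0))) before compilation. */
    printf("area = %f\n", CIRCLE_AREA(2.0));
    return 0;
}

Expanding a short name such as CIRCLE_AREA(2.0) into the longer underlying expression is exactly the “big block of code from a small sequence of characters” idea described above.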

——————————————————————

Source: Wikipedia

