Tuesday 22 July 2008

Application of P2P network outside computer science

Bioinformatics: P2P networks have also begun to attract attention from scientists in other disciplines, especially those that deal with large datasets, such as bioinformatics. P2P networks can be used to run large programs designed to carry out tests to identify drug candidates. The first such program was begun in 2001 by the Centre for Computational Drug Discovery at Oxford University in cooperation with the National Foundation for Cancer Research. There are now several similar programs running under the auspices of the United Devices Cancer Research Project. On a smaller scale, a self-administered program for computational biologists to run and compare various bioinformatics software is available from Chinook. Tranche is an open-source set of software tools for setting up and administering a decentralized network. It was developed to solve the bioinformatics data-sharing problem in a secure and scalable fashion.
Academic search engine: The sciencenet P2P search engine, developed by the Liebel-Lab at the Karlsruhe Institute of Technology (KIT), provides a free and open search engine for scientific knowledge. sciencenet is based on YaCy technology; universities and research institutes can download the free Java software and contribute their own peer(s) to the global network.
Education and academia: Thanks to their fast distribution and large storage capacity, many organizations are trying to apply P2P networks for educational and academic purposes. For instance, Pennsylvania State University, MIT and Simon Fraser University are carrying out a project called LionShare, designed to facilitate file sharing among educational institutions globally.
Military: The U.S. Department of Defense has already started research on P2P networks as part of its modern network warfare strategy. In November 2001, Colonel Robert Wardell from the Pentagon told a group of P2P software engineers at a tech conference in Washington, DC: "You have to empower the fringes if you are going to... be able to make decisions faster than the bad guy".[8] Wardell indicated he was looking for P2P experts to join his engineering effort. In May 2003, Dr. Tether, Director of the Defense Advanced Research Projects Agency, testified that the U.S. military is using P2P networks. For security reasons, details are kept classified.
Business: P2P networks have already been used in business, but adoption is still in its early stages. Kato et al.'s studies indicate that over 200 companies, with approximately $400 million USD, are investing in P2P networking. Besides file sharing, companies are also interested in distributed computing, content distribution, e-marketplaces, distributed search engines, groupware and office automation via P2P networks. There are several reasons why companies sometimes prefer P2P: real-time collaboration (a central server cannot scale well with an increasing volume of content), processes that require strong computing power, and processes that need high-speed communications. At the same time, P2P is not yet fully exploited because it still faces many security issues.
TV: Quite a few applications are available to deliver TV content over a P2P network (P2PTV).
Telecommunication: Nowadays, people are no longer satisfied merely with being able to hear a person on the other side of the earth; demand for clearer real-time voice is increasing globally. As with the TV network, cables are already in place and companies are unlikely to replace them all, so many turn to the Internet, and more specifically to P2P networks. For instance, Skype, one of the most widely used Internet phone applications, uses P2P technology. Furthermore, many research organizations are trying to apply P2P networking to cellular networks.

US legal controversy

In Sony Corp. v. Universal Studios, 464 U.S. 417 (1984), the Supreme Court found that Sony's new product, the Betamax, did not subject Sony to secondary copyright liability because it was capable of substantial non-infringing uses. Decades later, this case became the jumping-off point for all peer-to-peer copyright infringement litigation.
The first peer-to-peer case was A&M Records v. Napster, 239 F.3d 1004 (9th Cir. 2001). In the Napster case, the 9th Circuit considered whether Napster was liable as a secondary infringer. First, the court considered whether Napster was contributorily liable for copyright infringement. To be found contributorily liable, Napster must have engaged in "personal conduct that encourages or assists the infringement."[2] The court found that Napster was contributorily liable for the copyright infringement of its end-users because it "knowingly encourages and assists the infringement of plaintiffs' copyrights."[3] The court went on to analyze whether Napster was vicariously liable for copyright infringement. The standard applied by the court was whether Napster "has the right and ability to supervise the infringing activity and also has a direct financial interest in such activities."[4] The court found that Napster did receive a financial benefit, and had the right and ability to supervise the activity, meaning that the plaintiffs demonstrated a likelihood of success on the merits of their claim of vicarious infringement.[5] The court denied all of Napster's defenses, including its claim of fair use.
The next major peer-to-peer case was MGM v. Grokster, 545 U.S. 913 (2005). In this case, the Supreme Court found that even if Grokster was capable of substantial non-infringing uses, which the Sony Court had found was enough to relieve one of secondary copyright liability, Grokster was still secondarily liable because it induced its users to infringe.[6]
Around the world in 2006, an estimated five billion songs, equating to 38,000 years of music, were swapped on peer-to-peer websites, while 509 million were purchased online.[7]

Unstructured and structured P2P networks

The P2P overlay network consists of all the participating peers as network nodes. There are links between any two nodes that know each other: i.e. if a participating peer knows the location of another peer in the P2P network, then there is a directed edge from the former node to the latter in the overlay network. Based on how the nodes in the overlay network are linked to each other, we can classify the P2P networks as unstructured or structured.
An unstructured P2P network is formed when the overlay links are established arbitrarily. Such networks are easy to construct: a new peer that wants to join the network can copy the existing links of another node and then form its own links over time. In an unstructured P2P network, if a peer wants to find a desired piece of data, the query has to be flooded through the network to reach as many peers as possible that share the data. The main disadvantage of such networks is that the queries may not always be resolved. Popular content is likely to be available at several peers, and any peer searching for it is likely to find it. But if a peer is looking for rare data shared by only a few other peers, it is highly unlikely that the search will be successful. Since there is no correlation between a peer and the content it manages, there is no guarantee that flooding will find a peer that has the desired data. Flooding also causes a high amount of signaling traffic, so such networks typically have very poor search efficiency. Most of the popular P2P networks, such as Gnutella and FastTrack, are unstructured.
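To make the flooding idea concrete, the following Python sketch runs a query over a small, made-up overlay graph with a time-to-live (TTL) counter; it illustrates the principle only, not any particular protocol such as Gnutella. Note how the rare file can be missed when the TTL is too small.

    # Minimal sketch of flooded search in an unstructured overlay (illustrative only).
    # "peers" maps each peer to the peers it knows; "content" maps peers to shared data.
    peers = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C", "E"],
        "E": ["D"],
    }
    content = {"E": {"rare-file.dat"}, "B": {"popular-file.mp3"}}

    def flood_search(start, wanted, ttl=3):
        """Forward the query to all known peers until found or the TTL runs out."""
        visited, frontier, hits = set(), [start], []
        for _ in range(ttl):
            next_frontier = []
            for peer in frontier:
                if peer in visited:
                    continue
                visited.add(peer)
                if wanted in content.get(peer, set()):
                    hits.append(peer)
                next_frontier.extend(peers.get(peer, []))
            frontier = next_frontier
        return hits

    print(flood_search("A", "rare-file.dat"))          # [] - missed, TTL too small
    print(flood_search("A", "rare-file.dat", ttl=5))   # ['E'] - found with a larger TTL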
Structured P2P networks employ a globally consistent protocol to ensure that any node can efficiently route a search to some peer that has the desired file, even if the file is extremely rare. Such a guarantee necessitates a more structured pattern of overlay links. By far the most common type of structured P2P network is the distributed hash table (DHT), in which a variant of consistent hashing is used to assign ownership of each file to a particular peer, in a way analogous to a traditional hash table's assignment of each key to a particular array slot. Some well-known DHTs are Chord, Pastry, Tapestry, CAN, and Tulip. HyperCuP is a structured P2P network that does not take a DHT approach.
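The following Python sketch shows the consistent-hashing idea at the heart of a DHT: node identifiers and keys (file names here) are hashed onto the same ring, and each key is owned by the first node at or after its position. The node and file names are invented for the example; real DHTs such as Chord add routing tables and replication on top of this.

    # Minimal sketch of consistent hashing, the building block of DHTs (illustrative only).
    import hashlib
    from bisect import bisect

    def h(value: str) -> int:
        """Hash a string onto a fixed-size identifier ring."""
        return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2 ** 16)

    nodes = ["node-a", "node-b", "node-c", "node-d"]
    ring = sorted((h(n), n) for n in nodes)        # node positions on the ring

    def owner(key: str) -> str:
        """The first node clockwise from the key's position owns the key."""
        points = [p for p, _ in ring]
        idx = bisect(points, h(key)) % len(ring)   # wrap around at the end of the ring
        return ring[idx][1]

    for filename in ["song.mp3", "paper.pdf", "rare-file.dat"]:
        print(filename, "->", owner(filename))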

An important goal in P2P networks is that all clients provide resources, including bandwidth, storage space, and computing power. Thus, as nodes arrive and demand on the system increases, the total capacity of the system also increases. This is not true of a client-server architecture with a fixed set of servers, in which adding more clients could mean slower data transfer for all users.
The distributed nature of P2P networks also increases robustness in case of failures by replicating data over multiple peers, and -- in pure P2P systems -- by enabling peers to find the data without relying on a centralized index server. In the latter case, there is no single point of failure in the system.[1]

Classifications of P2P networks

P2P networks can be classified by what they can be used for:
file sharing
telephony
media streaming (audio, video)
discussion forums
Another classification of P2P networks is according to their degree of centralization.
In 'pure' P2P networks:
Peers act as equals, merging the roles of client and server
There is no central server managing the network
There is no central router
Some examples of pure P2P application layer networks designed for file sharing are Gnutella and Freenet.
There also exist countless hybrid P2P systems, in which:
A central server keeps information on peers and responds to requests for that information.
Peers are responsible for hosting available resources (as the central server does not have them), for letting the central server know what resources they want to share, and for making their shareable resources available to peers that request them.
Route terminals are used as addresses, which are referenced by a set of indices to obtain an absolute address.
e.g.
Centralized P2P network such as Napster
Decentralized P2P network such as KaZaA
Structured P2P network such as CAN
Unstructured P2P network such as Gnutella
Hybrid P2P network (centralized and decentralized) such as JXTA (an open-source P2P protocol specification)

Peer-to-peer

A peer-to-peer (or P2P) computer network uses diverse connectivity between participants and the cumulative bandwidth of network participants, rather than conventional centralized resources where a relatively low number of servers provide the core value of a service or application. P2P networks typically connect nodes via largely ad hoc connections. Such networks are useful for many purposes. Sharing content files (see file sharing) containing audio, video, data or anything in digital format is very common, and real-time data, such as telephony traffic, is also passed using P2P technology.
A pure P2P network does not have the notion of clients or servers but only equal peer nodes that simultaneously function as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client-server model where communication is usually to and from a central server. A typical example of a file transfer that is not P2P is an FTP server where the client and server programs are quite distinct, the clients initiate the download/uploads, and the servers react to and satisfy these requests.
The earliest P2P network in widespread use was the Usenet news server system, in which peers communicated with one another to propagate Usenet news articles over the entire Usenet network. Particularly in the earlier days of Usenet, UUCP was used to extend even beyond the Internet. However, the news server system also acted in a client-server form when individual users accessed a local news server to read and post articles. The same consideration applies to SMTP email in the sense that the core email relaying network of Mail transfer agents is a P2P network while the periphery of Mail user agents and their direct connections is client server.
Some networks and channels such as Napster, OpenNAP and IRC server channels use a client-server structure for some tasks (e.g. searching) and a P2P structure for others. Networks such as Gnutella or Freenet use a P2P structure for all purposes, and are sometimes referred to as true P2P networks, although Gnutella is greatly facilitated by directory servers that inform peers of the network addresses of other peers.
P2P architecture embodies one of the key technical concepts of the Internet, described in the first Internet Request for Comments, RFC 1, "Host Software" dated 7 April 1969. More recently, the concept has achieved recognition in the general public in the context of the absence of central indexing servers in architectures used for exchanging multimedia files.
The concept of P2P is increasingly evolving to an expanded usage as the relational dynamic active in distributed networks, i.e. not just computer to computer, but human to human. Yochai Benkler has coined the term "commons-based peer production" to denote collaborative projects such as free software. Associated with peer production are the concept of peer governance (referring to the manner in which peer production projects are managed) and peer property (referring to the new type of licenses which recognize individual authorship but not exclusive property rights, such as the GNU General Public License and the Creative Commons licenses).

X.Org and XFree86
XFree86 originated in 1992 from the X386 server for IBM PC compatibles included with X11R5 in 1991, written by Thomas Roell and Mark W. Snitily and donated to the MIT X Consortium by Snitily Graphics Consulting Services (SGCS). XFree86 evolved over time from just one port of X to the leading and most popular implementation and the de facto steward of X's development.[12]
In May 1999, the Open Group formed X.Org. X.Org supervised the release of versions X11R6.5.1 onward. X development at this time had become moribund[13]; most technical innovation since the X Consortium had dissolved had taken place in the XFree86 project.[14] In 1999, the XFree86 team joined X.Org as an honorary (non-paying) member[15], encouraged by various hardware companies[16] interested in using XFree86 with Linux and in its status as the most popular version of X.
By 2003, while the popularity of Linux (and hence the installed base of X) surged, X.Org remained inactive[17], and active development took place largely within XFree86. However, considerable dissent developed within XFree86. The XFree86 project suffered from a perception of a far too cathedral-like development model; developers could not get CVS commit access[18][19] and vendors had to maintain extensive patch sets.[20] In March 2003 the XFree86 organization expelled Keith Packard, who had joined XFree86 after the end of the original MIT X Consortium, with considerable ill-feeling.[21][22][23]
X.Org and XFree86 began discussing a reorganisation suited to properly nurturing the development of X.[24][25][26] Jim Gettys had been pushing strongly for an open development model since at least 2000.[27] Gettys, Packard and several others began discussing in detail the requirements for the effective governance of X with open development.
Finally, in an echo of the X11R6.4 licensing dispute, XFree86 released version 4.4 in February 2004 under a more restricted license which many projects relying on X found unacceptable.[28] The added clause to the license was based upon the original BSD license's advertising clause, which was viewed by the Free Software Foundation and Debian as incompatible with the GNU General Public License.[29] Other groups saw further restrictions as being against the spirit of the original X (OpenBSD threatening a fork, for example). The license issue, combined with the difficulties in getting changes in, left many feeling the time was ripe for a fork.[30]

The X.Org Foundation
In early 2004 various people from X.Org and freedesktop.org formed the X.Org Foundation, and the Open Group gave it control of the x.org domain name. This marked a radical change in the governance of X. Whereas the stewards of X since 1988 (including the previous X.Org) had been vendor organizations, the Foundation was led by software developers and used community development based on the bazaar model, which relies on outside involvement. Membership was opened to individuals, with corporate membership being in the form of sponsorship. Several major corporations such as Hewlett-Packard and Sun Microsystems currently support the X.Org Foundation.
The Foundation takes an oversight role over X development: technical decisions are made on their merits by achieving rough consensus among community members. Technical decisions are not made by the board of directors; in this sense, it is strongly modelled on the technically non-interventionist GNOME Foundation. The Foundation does not employ any developers.
The Foundation released X11R6.7, the X.Org Server, in April 2004, based on XFree86 4.4RC2 with X11R6.6 changes merged. Gettys and Packard had taken the last version of XFree86 under the old license and, by making a point of an open development model and retaining GPL compatibility, brought many of the old XFree86 developers on board.[31]
X11R6.8 came out in September 2004. It added significant new features, including preliminary support for translucent windows and other sophisticated visual effects, screen magnifiers and thumbnailers, and facilities to integrate with 3D immersive display systems such as Sun's Project Looking Glass and the Croquet project. External applications called compositing window managers provide policy for the visual appearance.
On December 21, 2005,[32] X.Org released X11R6.9, the monolithic source tree for legacy users, and X11R7.0, the same source code separated into independent modules, each maintainable in separate projects.[33] The Foundation released X11R7.1 on May 22, 2006, about four months after 7.0, with considerable feature improvements.[34]

Future directions
With the X.Org Foundation and freedesktop.org, the main line of X development has started to progress rapidly once more. The developers intend to release present and future versions as usable finished products, not merely as bases for vendors to build a product upon.
For sufficiently capable combinations of hardware and operating systems, X.Org plans to access the video hardware only via OpenGL and the Direct Rendering Infrastructure (DRI). The DRI first appeared in XFree86 version 4.0 and became standard in X11R6.7 and later.[35] Many operating systems have started to add kernel support for hardware manipulation. This work proceeds incrementally.

Nomenclature
People in the computer trade commonly shorten the phrase "X Window System" to "X11" or simply to "X". The term "X Windows" (in the manner of "Microsoft Windows") is not officially endorsed, though it has been in common use since early in the history of X and has been used deliberately for literary effect, for example in the UNIX-HATERS Handbook.[36]

Release history
See also: XFree86#Release history
Version (release date): most important changes
X1 (June 1984): First use of the name "X"; fundamental changes distinguishing the product from W.
X6 (January 1985): First version licensed to a handful of outside companies.
X9 (September 1985): Color. First release under the MIT License.
X10 (late 1985): IBM RT/PC, AT (running DOS), and others.
X10R2 (January 1986)
X10R3 (February 1986): First release outside MIT. uwm made the standard window manager.
X10R4 (December 1986): Last version of X10.
X11 (September 15, 1987): First release of the current protocol.
X11R2 (February 1988): First X Consortium release.[37]
X11R3 (October 25, 1988): XDM.
X11R4 (December 22, 1989): XDMCP, twm brought in as standard window manager, application improvements, Shape extension, new fonts.
X11R4/X11R5 (December 1989): Commodore sells the Amiga 2500/UX (Unix based).[citation needed] It was the first computer sold on the market featuring a standard X11-based desktop GUI, called Open Look.[citation needed] Running AT&T UNIX System V R4, the system was equipped with a 68020 or 68030 CPU accelerator card, a SCSI controller card, a Texas Instruments TIGA 24-bit graphics card capable of showing 256 colors on screen, and a three-button mouse.[citation needed]
X11R5 (September 5, 1991): PEX, Xcms (color management), font server, X386, X video extension.
X11R6 (May 16, 1994): ICCCM v2.0; Inter-Client Exchange; X Session Management; X Synchronization extension; X Image extension; XTEST extension; X Input; X Big Requests; XC-MISC; XFree86 changes.
X11R6.1 (March 14, 1996): X Double Buffer extension; X keyboard extension; X Record extension.
X11R6.2 / X11R6.3 (Broadway) (December 23, 1996): Web functionality, LBX. Last X Consortium release. X11R6.2 is the tag for a subset of X11R6.3 with the only new features over R6.1 being XPrint and the Xlib implementation of vertical writing and user-defined character support.[38]
X11R6.4 (March 31, 1998): Xinerama.[39]
X11R6.5: Internal X.org release; not made publicly available.
X11R6.5.1 (August 20, 2000)
X11R6.6 (April 4, 2001): Bug fixes, XFree86 changes.
X11R6.7.0 (April 6, 2004): First X.Org Foundation release, incorporating XFree86 4.4rc2. Full end-user distribution. Removal of XIE, PEX and libxml2.[40]
X11R6.8.0 (September 8, 2004): Window translucency, XDamage, Distributed Multihead X, XFixes, Composite, XEvIE.
X11R6.8.1 (September 17, 2004): Security fix in libxpm.
X11R6.8.2 (February 10, 2005): Bug fixes, driver updates.
X11R6.9 / X11R7.0 (December 21, 2005): EXA, major source code refactoring.[citation needed][41] From the same source-code base, the modular autotooled version became 7.0 and the monolithic imake version was frozen at 6.9.
X11R7.1 (May 22, 2006): EXA enhancements, KDrive integrated, AIGLX, OS and platform support enhancements.[42]
X11R7.2 (February 15, 2007): Removal of LBX and the built-in keyboard driver, X-ACE, XCB, autoconfig improvements, cleanups.[43]
X11R7.3 (September 6, 2007): XServer 1.4, input hotplug, output hotplug (RandR 1.2), DTrace probes, PCI domain support.
Predecessors
Several bitmap display systems preceded X. From Xerox came the Alto (1973) and the Star (1981). From Apple came the Lisa (1983) and the Macintosh (1984). The Unix world had the Andrew Project (1982) and Rob Pike's Blit terminal (1982).
X derives its name as the successor to a pre-1983 window system called W (the letter X directly following W in the Latin alphabet). The W Window System ran under the V operating system. W used a network protocol supporting terminal and graphics windows, with the server maintaining display lists.

An early-1990s style Unix desktop running the X Window System graphical user interface shows many client applications common to the MIT X Consortium's distribution, including the twm window manager, an X Terminal, Xbiff, xload and a graphical manual page browser.

Origin and early development
The original idea of X emerged at MIT in 1984 as a collaboration between Jim Gettys (of Project Athena) and Bob Scheifler (of the MIT Laboratory for Computer Science). Scheifler needed a usable display environment for debugging the Argus system. Project Athena (a joint project between Digital Equipment Corporation (DEC), MIT and IBM to provide easy access to computing resources for all students) needed a platform-independent graphics system to link together its heterogeneous multiple-vendor systems; the window system then under development in Carnegie Mellon University's Andrew Project did not make licenses available, and no alternatives existed.
The project solved this by creating a protocol that could both run local applications and call on remote resources. In mid-1983 an initial port of W to Unix ran at one-fifth of its speed under V; in May 1984, Scheifler replaced the synchronous protocol of W with an asynchronous protocol and the display lists with immediate mode graphics to make X version 1. X became the first windowing system environment to offer true hardware-independence and vendor-independence.
Scheifler, Gettys and Ron Newman set to work and X progressed rapidly. They released Version 6 in January 1985. DEC, then preparing to release its first Ultrix workstation, judged X the only windowing system likely to become available in time. DEC engineers ported X6 to DEC's QVSS display on MicroVAX.
In the second quarter of 1985 X acquired color support to function in the DEC VAXstation-II/GPX, forming what became version 9.
A group at Brown University ported version 9 to the IBM RT/PC, but problems with reading unaligned data on the RT forced an incompatible protocol change, leading to version 10 in late 1985. By 1986, outside organizations had started asking for X. The release of X10R2 took place in January 1986; that of X10R3 in February 1986. Although MIT had licensed X6 to some outside groups for a fee, it decided at this time to license X10R3 and future versions under what became known as the MIT License, intending to popularize X further and in return, hoping that many more applications would become available. X10R3 became the first version to achieve wide deployment, with both DEC and Hewlett-Packard releasing products based on it. Other groups ported X10 to Apollo and to Sun workstations and even to the IBM PC/AT. Demonstrations of the first commercial application for X (a mechanical computer-aided engineering system from Cognition Inc. that ran on VAXes and displayed on PCs running an X server) took place at the Autofact trade show at that time. The last version of X10, X10R4, appeared in December 1986.
Attempts were made to enable X servers as real-time collaboration devices, much as Virtual Network Computing (VNC) would later allow a desktop to be shared. One such early effort was Philip J. Gust's SharedX tool.
Although X10 offered interesting and powerful functionality, it had become obvious that the X protocol could use a more hardware-neutral redesign before it became too widely deployed; but MIT alone would not have the resources available for such a complete redesign. As it happened, DEC's Western Software Laboratory found itself between projects with an experienced team. Smokey Wallace of DEC WSL and Jim Gettys proposed that DEC WSL build X11 and make it freely available under the same terms as X9 and X10. This process started in May 1986, with the protocol finalized in August. Alpha-testing of the software started in February 1987, beta-testing in May; the release of X11 finally occurred on September 15, 1987.
The X11 protocol design, led by Scheifler, was extensively discussed on open mailing lists on the nascent Internet that were bridged to USENET newsgroups. Gettys moved to California to help lead the X11 development work at WSL from DEC's Systems Research Center, where Phil Karlton and Susan Angebrandt led the X11 sample server design and implementation. X therefore represents one of the first very large-scale distributed free software projects.

The MIT X Consortium and the X Consortium, Inc.
In 1987, with the success of X11 becoming apparent, MIT wished to relinquish the stewardship of X, but at a June 1987 meeting with nine vendors, the vendors told MIT that they believed in the need for a neutral party to keep X from fragmenting in the marketplace. In January 1988, the MIT X Consortium formed as a non-profit vendor group, with Scheifler as director, to direct the future development of X in a neutral atmosphere inclusive of commercial and educational interests. Jim Fulton joined in January 1988 and Keith Packard in March 1988 as senior developers, with Jim focusing on Xlib, fonts, window managers, and utilities; and Keith re-implementing the server. Donna Converse and Chris D. Peterson joined later that year, focusing on toolkits and widget sets, working closely with Ralph Swick of MIT Project Athena. The MIT X Consortium produced several significant revisions to X11, the first (Release 2 - X11R2) in February 1988.

DECwindows CDE on OpenVMS 7.3-1
In 1993, the X Consortium, Inc. (a non-profit corporation) formed as the successor to the MIT X Consortium. It released X11R6 on May 16, 1994. In 1995 it took over stewardship of the Motif toolkit and of the Common Desktop Environment for Unix systems. The X Consortium dissolved at the end of 1996, producing a final revision, X11R6.3, and a legacy of increasing commercial influence in the development.[8][9]

The Open Group
In mid-1997 the X Consortium passed stewardship of X to The Open Group, a vendor group formed in early 1996 by the merger of the Open Software Foundation and X/Open.
The Open Group released X11R6.4 in early 1998. Controversially, X11R6.4 departed from the traditional liberal licensing terms, as the Open Group sought to assure funding for X's development.[10] The new terms would have prevented its adoption by many projects (such as XFree86) and even by some commercial vendors. After XFree86 seemed poised to fork, the Open Group relicensed X11R6.4 under the traditional license in September 1998.[11] The Open Group's last release came as X11R6.4 patch 3.
Design
For more details on this topic, see X Window System protocols and architecture.
For more details on this topic, see X Window System core protocol.
X uses a client-server model: an X server communicates with various client programs. The server accepts requests for graphical output (windows) and sends back user input (from keyboard, mouse, or touchscreen). The server may function as:
an application displaying to a window of another display system
a system program controlling the video output of a PC
a dedicated piece of hardware.
This client-server terminology — the user's terminal as the "server", the remote or local applications as the "clients" — often confuses new X users, because the terms appear reversed. But X takes the perspective of the program, rather than that of the end-user or of the hardware: the local X display provides display services to programs, so it acts as a server; any remote program uses these services, thus it acts as a client.

In this example, the X server takes input from a keyboard and mouse and displays to a screen. A web browser and a terminal emulator run on the user's workstation, and a system updater runs on a remote server but is controlled from the user's machine. Note that the remote application runs just as it would locally.
The communication protocol between server and client operates network-transparently: the client and server may run on the same machine or on different ones, possibly with different architectures and operating systems, but they run the same in either case. A client and server can even communicate securely over the Internet by tunneling the connection over an encrypted network session.
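As a concrete illustration of the client side of this model, the minimal sketch below assumes the third-party python-xlib package is installed; it connects to whichever X server the DISPLAY environment variable names (local or remote, thanks to the network transparency just described), creates a small window and waits for a key press.

    # Minimal X client sketch using the third-party python-xlib package (assumed installed).
    from Xlib import X, display

    d = display.Display()                 # connect to the server named by $DISPLAY
    screen = d.screen()
    win = screen.root.create_window(
        10, 10, 200, 100, 1,              # x, y, width, height, border width
        screen.root_depth,
        background_pixel=screen.white_pixel,
        event_mask=X.ExposureMask | X.KeyPressMask,
    )
    win.map()                             # ask the server to show the window
    d.flush()

    while True:
        event = d.next_event()            # user input comes back from the server
        if event.type == X.KeyPress:
            break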
An X client can itself act as an X server that displays other clients within one of its windows; this is known as "X nesting". Open-source clients such as Xnest and Xephyr support such X nesting.
To start a remote client program displaying to a local server, the user will typically open a terminal window and telnet or ssh to the remote machine, start the client application or shell there, and request local display/input service (e.g. export DISPLAY=[user's machine]:0 on a remote machine running bash). The client application then connects to the local server, which provides a display and input session to the local user. Alternatively, the local machine may run a small helper program to connect to a remote machine and start the desired client application there.
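A rough sketch of that procedure in Python, with made-up host names: the local machine asks the remote one to run an X client that points back at the local display (ssh's -X option automates a tunnelled, safer version of the same idea).

    # Sketch only: start a remote X client that displays on the local server.
    # Host names and the display address are invented for the example.
    import subprocess

    LOCAL_DISPLAY = "workstation.example.com:0"      # the user's local X server

    subprocess.run([
        "ssh", "user@remote.example.com",
        f"DISPLAY={LOCAL_DISPLAY} xterm &",          # run xterm there, display here
    ])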
Practical examples of remote clients include:
administering a remote machine graphically
running a computationally intensive simulation on a remote Unix machine and displaying the results on a local Windows desktop machine
running graphical software on several machines at once, controlled by a single display, keyboard and mouse.

Principles
In 1984, Bob Scheifler and Jim Gettys set out the early principles of X:
Do not add new functionality unless an implementor cannot complete a real application without it.
It is as important to decide what a system is not as to decide what it is. Do not serve all the world's needs; rather, make the system extensible so that additional needs can be met in an upwardly compatible fashion.
The only thing worse than generalizing from one example is generalizing from no examples at all.
If a problem is not completely understood, it is probably best to provide no solution at all.
If you can get 90 percent of the desired effect for 10 percent of the work, use the simpler solution. (See also Worse is better.)
Isolate complexity as much as possible.
Provide mechanism rather than policy. In particular, place user interface policy in the clients' hands.
The first principle was modified during the design of X11 to: "Do not add new functionality unless you know of some real application that will require it."
X has largely kept to these principles since. The reference implementation is developed with a view to extension and improvement of the implementation, whilst remaining almost entirely compatible with the original 1987 protocol.

User interfaces
X deliberately contains no specification of the application user interface, such as buttons, menus, or window title bars. Instead, user software – such as window managers, GUI widget toolkits and desktop environments, or application-specific graphical user interfaces – provides or defines all such details. As a result, there is no single "typical" X interface; at most times several different interfaces have been popular among users.
A window manager controls the placement and appearance of application windows. This may have an interface akin to that of Microsoft Windows or of the Macintosh (examples include Metacity in GNOME, KWin in KDE or Xfwm in Xfce) or have radically different controls (such as a tiling window manager). The window manager may be bare-bones (e.g. twm, the basic window manager supplied with X, or evilwm, an extremely light window manager) or offer functionality verging on that of a full desktop environment (e.g. Enlightenment).
Many users use X with a full desktop environment, which includes a window manager, various applications and a consistent interface. GNOME, KDE and Xfce are the most popular desktop environments. The Unix standard environment is the Common Desktop Environment (CDE). The freedesktop.org initiative addresses interoperability between desktops and the components needed for a competitive X desktop.
As X is responsible for keyboard and mouse interaction with graphical desktops, certain keyboard shortcuts have become associated with X. Control-Alt-Backspace typically terminates the currently running X session, while Control-Alt in conjunction with a function key switches to the associated virtual console. Note, however, that this is an implementation detail left to an individual X server and is by no means universal; for example, X server implementations for Windows and Macintosh typically do not provide these shortcuts.

Implementations
The X.Org reference implementation serves as the canonical implementation of X. Due to liberal licensing, a number of variations, both free and proprietary, have appeared. Commercial UNIX vendors have tended to take the reference implementation and adapt it for their hardware, usually customising it heavily and adding proprietary extensions.

Cygwin/X running rootless on Microsoft Windows XP. The screen shows X applications (xeyes, xclock, xterm) sharing the screen with native Windows applications (Date and Time, Calculator).
Up to 2004, XFree86 provided the most common X variant on free Unix-like systems. XFree86 started as a port of X for 386-compatible PCs and, by the end of the 1990s, had become the greatest source of technical innovation in X and the de facto standard of X development.[2] Since 2004, however, the X.Org reference implementation, a fork of XFree86, has become predominant.
While computer aficionados most often associate X with Unix, X servers also exist natively within other graphical environments. Hewlett-Packard's OpenVMS operating system includes a version of X with CDE, known as DECwindows, as its standard desktop environment. Apple's Mac OS X v10.3 (Panther) and up includes X11.app, based on XFree86 4.3 and X11R6.6, with better Mac OS X integration. Third-party servers under Mac OS 7, 8 and 9 included MacX.
Microsoft Windows does not come with support for X, but many third-party implementations exist, both free software such as Cygwin/X, Xming and WeirdX; and proprietary products such as Xmanager, MKS X/Server, Exceed and X-Win32. They normally serve to control remote X clients.
When another windowing system (such as those of Microsoft Windows or Mac OS) hosts X, the X system generally runs "rootless", meaning the X root window is hidden and each X client window appears as an ordinary window of the host windowing system, alongside native applications.

X Window System

In computing, the X Window System (commonly X11 or X) is a windowing system which implements the X display protocol and provides windowing on bitmap displays. It provides the standard toolkit and protocol with which to build graphical user interfaces (GUIs) on most Unix-like operating systems and OpenVMS, and has been ported to many other contemporary general purpose operating systems.
X provides the basic framework, or primitives, for building GUI environments: drawing and moving windows on the screen and interacting with a mouse and/or keyboard. X does not mandate the user interface — individual client programs handle this. As such, the visual styling of X-based environments varies greatly; different programs may present radically different interfaces. X is built as an additional application layer on top of the operating system kernel.
Unlike previous display protocols, X was specifically designed to be used over network connections rather than on an integral or attached display device. X features network transparency: the machine where an application program (the client application) runs can differ from the user's local machine (the display server).
X originated at MIT in 1984. The current protocol version, X11, appeared in September 1987. The X.Org Foundation leads the X project, with the current reference implementation, X.org Server, available as free software under the MIT License and similar permissive licences.

Game server

A game server is a remotely or locally run server used by game clients to play multiplayer games. Most video games played over the Internet operate via a connection to a game server.

Types of game servers
Game servers can be classified as listen servers and dedicated servers. Listen server refers to a situation in which the server typically runs in the same process as the game client, allowing a player to both host and participate in the game. As a side effect, the server is usually terminated when the client is. Listen servers are operated mostly by individuals, often in LAN situations rather than over the internet, and usually with a lower number of players due to the increased processing and bandwidth requirements associated with operating both server and client simultaneously on the same machine.
Dedicated servers are servers which run independently of the client. Such servers may be run by individuals, but are usually run on dedicated hardware located in data centers, providing more bandwidth and dedicated processing power. Dedicated servers are the preferred method of hosting game servers for most PC-based multiplayer games.
Massively multiplayer online games run on dedicated servers usually hosted by the software company that owns the game title, allowing them to control and update content. In many cases they are run on clustered servers to allow for huge environments and large player counts.

Game server hosting
Game server providers (GSPs) are companies that lease dedicated game servers. Gaming clans will often lease one or more servers for their chosen game, with members of the clan contributing to the server rental fees.
Game server providers often offer web based tools to help control and configure the individual game servers and most allow those that rent/lease to modify the games being leased.

Web Feed Server

A web feed server, also known as an RSS server, enables the distribution, management, reading and tracking of internal and external RSS, Atom and XML web feeds behind an organization's firewall.[1] Using a web feed server, IT administrators can create users and groups and define subscriptions for each. Typically a web feed server manages the synchronization of web feeds between desktop, browser and mobile RSS readers connected to the server. Additionally, it aggregates the company-wide RSS feeds, eliminating the need for an individual RSS feed aggregator on each computer.
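The aggregation part of this job is essentially "fetch every subscribed feed and merge the items". The short Python sketch below does that with only the standard library and invented intranet feed URLs; an actual web feed server layers authentication, per-user subscriptions and reader synchronization on top.

    # Minimal sketch of company-wide feed aggregation (stdlib only, URLs are made up).
    import urllib.request
    import xml.etree.ElementTree as ET

    SUBSCRIPTIONS = [
        "http://intranet.example.com/blog/rss.xml",
        "http://intranet.example.com/wiki/changes.xml",
    ]

    def fetch_titles(url):
        """Return the item titles of one RSS feed."""
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        return [item.findtext("title") for item in tree.iter("item")]

    def aggregate(urls):
        """Merge all subscribed feeds into a single list of titles."""
        merged = []
        for url in urls:
            merged.extend(fetch_titles(url))
        return merged

    if __name__ == "__main__":
        for title in aggregate(SUBSCRIPTIONS):
            print(title)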

Advantages of Web Feed servers
Enterprise Collaboration
Enterprise 2.0 tools such as blogs and wikis are designed to improve enterprise collaboration. These publishing tools eliminate barriers to conventional communications by encouraging high value content sharing and transparency. A web feed server uses RSS subscription feeds to automatically update team members and manage collaboration.[2]
Security
Behind the firewall, web feed servers create a secure, scalable, organization-wide web feed environment. Internal and external feeds are authenticated to ensure that private data is kept private while securely stored behind the corporate firewall.

Web Feed Server Providers
Attensa Feed Server
NewsGator
EasyByte RSS Server

Print server

A print server, or printer server, is a computer or device to which one or more printers are connected, which can accept print jobs from external client computers connected to the print server over a network. The print server then sends the data to the appropriate printer(s) that it manages.
The term print server can refer to:
A host computer running Windows OS with one or more shared printers. Client computers connect using Microsoft Network Printing protocol.
A computer running some operating system other than Windows, but still implementing the Microsoft Network Printing protocol (typically Samba running on a UNIX or Linux computer).
A computer that implements the LPD service and thus can process print requests from LPD clients.
A dedicated device that connects one or more printers to a LAN. It typically has a single LAN connector, such as an RJ-45 socket, and one or more physical ports (e.g. serial, parallel or USB (Universal Serial Bus)) to provide connections to printers. In essence, this dedicated device provides printing-protocol conversion from what was sent by client computers to what will be accepted by the printer. Dedicated print server devices may support a variety of printing protocols including LPD/LPR over TCP/IP, NetWare, NetBIOS/NetBEUI over NBF, TCP port 9100 or RAW printer protocol over TCP/IP (see the sketch after this list), DLC or IPX/SPX. Dedicated server appliances tend to be fairly simple in both configuration and features; however, they are available integrated with other devices such as a wireless router, a firewall, or both.[1]
A dedicated device similar to definition 4 above, that also implements Microsoft Networking protocols to appear to Windows client computers as if it were a print server defined in 1 above.
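For the "TCP port 9100 or RAW printer protocol" case mentioned in the list above, the protocol conversion can be as small as opening a TCP connection to the printer and writing the job bytes. The Python sketch below shows that minimal case with an invented printer address; a real print server also handles spooling, status reporting and the other protocols listed.

    # Minimal sketch of RAW printing to TCP port 9100 (printer address is made up).
    import socket

    PRINTER_ADDR = ("192.0.2.50", 9100)      # the print server / printer port

    def send_raw_job(data: bytes) -> None:
        """Open a TCP connection to port 9100 and write the job bytes."""
        with socket.create_connection(PRINTER_ADDR, timeout=10) as conn:
            conn.sendall(data)

    if __name__ == "__main__":
        send_raw_job(b"Hello from the print server example\f")   # \f = form feed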

Web server

Load limits
A web server (program) has defined load limits, because it can handle only a limited number of concurrent client connections (usually between 2 and 60,000, by default between 500 and 1,000) per IP address (and IP port), and it can serve only a certain maximum number of requests per second, depending on:
its own settings;
the HTTP request type;
whether the content is static or dynamic;
whether the served content is cached;
the hardware and software limits of the OS on which it is running.
When a web server is near to or over its limits, it becomes overloaded and thus unresponsive.
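To make the idea of a connection limit concrete, the following toy Python server (the port number and the limit are arbitrary choices for the sketch) accepts at most a fixed number of simultaneous connections and refuses the rest with a 503 response, which is roughly how an overloaded server ends up behaving.

    # Toy sketch of a concurrent-connection limit; not a real web server.
    import socket
    import threading

    MAX_CONN = 500                      # arbitrary limit for the sketch
    slots = threading.BoundedSemaphore(MAX_CONN)

    def handle(conn):
        try:
            conn.recv(4096)             # read (and ignore) the request
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        finally:
            conn.close()
            slots.release()

    def serve(port=8080):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(128)
        while True:
            conn, _ = srv.accept()
            if not slots.acquire(blocking=False):
                # Over the limit: behave like an overloaded server and refuse.
                conn.sendall(b"HTTP/1.1 503 Service Unavailable\r\n\r\n")
                conn.close()
                continue
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        serve()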

Overload causes

A daily graph of a web server's load, indicating a spike in the load early in the day.
At any time web servers can be overloaded because of:
Too much legitimate web traffic (i.e. thousands or even millions of clients hitting the web site in a short interval of time, e.g. the Slashdot effect);
DDoS (Distributed Denial of Service) attacks;
Computer worms, which sometimes cause abnormal traffic because of millions of infected computers (not coordinated among themselves);
XSS viruses, which can cause high traffic because of millions of infected browsers and/or web servers;
Internet web robot traffic that is not filtered or limited on large web sites with very few resources (bandwidth, etc.);
Internet (network) slowdowns, so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
Partial unavailability of web servers (computers); this can happen because of required or urgent maintenance or upgrades, hardware or software failures, or back-end (e.g. database) failures; in these cases the remaining web servers get too much traffic and become overloaded.

Overload symptoms

The symptoms of an overloaded web server are:
requests are served with (possibly long) delays (from 1 second to a few hundred seconds);
500, 502, 503, 504 HTTP errors are returned to clients (sometimes an unrelated 404 or even 408 error may also be returned);
TCP connections are refused or reset (interrupted) before any content is sent to clients;
in very rare cases, only partial contents are sent (but this behavior may well be considered a bug, even if it usually depends on unavailable system resources).
Anti-overload techniques
To partially overcome the load limits described above and to prevent overload, most popular web sites use common techniques such as:
managing network traffic, by using:
Firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP patterns;
Bandwidth management and traffic shaping, in order to smooth down peaks in network usage;
deploying web cache techniques;
using different domain names to serve different (static and dynamic) content by separate Web servers, i.e.:
http://images.example.com
http://www.example.com
using different domain names and/or computers to separate big files from small and medium sized files; the idea is to be able to fully cache small and medium sized files and to efficiently serve big or huge (over 10 - 1000 MB) files by using different settings;
using many Web servers (programs) per computer, each one bound to its own network card and IP address;
using many Web servers (computers) that are grouped together so that they act or are seen as one big Web server (see also: load balancer; a round-robin sketch follows this list);
adding more hardware resources (i.e. RAM, disks) to each computer;
tuning OS parameters for hardware capabilities and usage;
using more efficient computer programs for web servers, etc.;
using other workarounds, especially if dynamic content is involved.
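The load-balancing item above ("many Web servers ... seen as one big Web server") ultimately comes down to choosing a backend for each incoming request. The Python sketch below shows the simplest policy, round robin over a pool of invented backend addresses; real load balancers add health checks, session affinity and failover.

    # Minimal round-robin backend selection (backend addresses are made up).
    import itertools

    BACKENDS = ["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"]
    _pool = itertools.cycle(BACKENDS)

    def pick_backend() -> str:
        """Return the next backend in round-robin order."""
        return next(_pool)

    if __name__ == "__main__":
        for path in ["/", "/img/logo.png", "/news", "/about"]:
            print(path, "->", pick_backend())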

Historical notes

The world's first web server.
In 1989 Tim Berners-Lee proposed to his employer CERN (European Organization for Nuclear Research) a new project, which had the goal of easing the exchange of information between scientists by using a hypertext system. As a result of the implementation of this project, in 1990 Berners-Lee wrote two programs:
a browser called WorldWideWeb;
the world's first web server, which ran on NeXTSTEP.
Between 1991 and 1994 the simplicity and effectiveness of early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and spread their use among lots of different social groups of people, first in scientific organizations, then in universities and finally in industry.
In 1994 Tim Berners-Lee decided to constitute the World Wide Web Consortium to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.
The years that followed saw exponential growth in the number of web sites and servers.

Market structure
Given below is a list of top Web server software vendors published in a Netcraft survey in April 2008.
Vendor / Product / Web sites hosted / Percent
Apache / Apache / 83,206,564 / 50.22%
Microsoft / IIS / 58,540,275 / 35.33%
Google / GWS / 10,075,991 / 6.08%
Oversee / Oversee / 1,926,812 / 1.16%
lighttpd / lighttpd / 1,495,308 / 0.90%
nginx / nginx / 1,018,503 / 0.61%
Others / - / 9,432,775 / 5.69%
Total / - / 165,696,228 / 100.00%
There are hundreds of different web server programs available, many of which are specialized for very specific purposes, so the fact that a web server is not very popular does not necessarily mean that it has a lot of bugs or poor performance.
See Category:Web server software for a longer list of HTTP server programs.

Advantages of Fax servers

Advantages over paper fax machines
Users can send and receive faxes without leaving their desks.
Any printable computer file can be faxed, without having to first print the document on paper.
The number of fax lines in an organisation can be reduced, as the server can queue large numbers of faxes and send each when any of a number of lines is free.
Faxing capability can be added easily to computer programs, allowing automatic generation of faxes.
Transmitted faxes are more legible and professional-looking.
There is less clutter of office equipment; incoming faxes can be printed on a standard computer printer.
Incoming faxes lost to printer jams or printer malfunctions can be reprinted without being re-faxed.
Faxing may be monitored and/or recorded, so that users may be allocated quotas or charged fees, or to ensure compliance with data-retention and financial laws.
Fax servers can be located centrally in an organisation's data centres, providing resilience and disaster recovery facilities for a traditionally desktop-based technology.
Incoming junk faxes are not as much of a problem; the server may maintain a blacklist of numbers it will not accept faxes from (or a white list listing all the numbers it will accept calls from), and those that do get through do not waste paper.

Public fax services
There are many companies (internet fax providers) operating fax servers as a commercial public service. Subscribers can interact with the servers using methods similar to those available for standard fax servers, and are assigned a dedicated fax number for as long as they maintain their subscription. Fees are normally charged at a flat monthly rate, with a limit on the number of fax pages sent and/or received. Organisations, and individuals in particular, may find this more convenient or cost-effective than operating their own fax systems.

Integrated fax programs
An integrated fax program is a complete set of faxing software which operates on a single computer equipped with a fax-capable modem connected to a telephone line. Its user interfaces may be similar to those used to communicate with fax servers, except that since the entire operation takes place on the user's computer, the user may be made more aware of the progress of the transmission. Integrated fax programs are aimed at consumers and small organizations, and may sometimes be bundled with the computer's operating system.