Bioinformatics: P2P networks have also begun to attract attention from scientists in other disciplines, especially those that deal with large datasets, such as bioinformatics. P2P networks can be used to run large programs designed to carry out tests that identify drug candidates. The first such program was begun in 2001 by the Centre for Computational Drug Discovery at Oxford University in cooperation with the National Foundation for Cancer Research. There are now several similar programs running under the auspices of the United Devices Cancer Research Project. On a smaller scale, a self-administered program that lets computational biologists run and compare various bioinformatics software is available from Chinook. Tranche is an open-source set of software tools for setting up and administering a decentralized network, developed to solve the bioinformatics data-sharing problem in a secure and scalable fashion.
Academic search engine: The sciencenet P2P search engine provides a free and open search engine for scientific knowledge. sciencenet is based on YaCy technology. Universities and research institutes can download the free Java software and contribute their own peer(s) to the global network. The project is run by the Liebel-Lab at the Karlsruhe Institute of Technology (KIT).
Education and academia: Owing to P2P networks' fast distribution and large storage capacity, many organizations are trying to apply them for educational and academic purposes. For instance, Pennsylvania State University, MIT and Simon Fraser University are carrying out a project called LionShare, designed to facilitate file sharing among educational institutions globally.
Military: The U.S. Department of Defense has already started research on P2P networks as part of its modern network warfare strategy. In November 2001, Colonel Robert Wardell from the Pentagon told a group of P2P software engineers at a tech conference in Washington, DC: "You have to empower the fringes if you are going to... be able to make decisions faster than the bad guy".[8] Wardell indicated he was looking for P2P experts to join his engineering effort. In May 2003, Dr. Tether, Director of the Defense Advanced Research Projects Agency, testified that the U.S. military is using P2P networks. For security reasons, details are kept classified.
Business: P2P networks have already been applied in business, but adoption is still in its early stages. Studies by Kato et al. indicate that over 200 companies, with approximately $400 million USD invested, are involved in P2P networking. Besides file sharing, companies are also interested in distributed computing, content distribution, e-marketplaces, distributed search engines, groupware and office automation via P2P networks. Companies sometimes prefer P2P for several reasons: real-time collaboration, where a server cannot scale well with an increasing volume of content; processes that require strong computing power; and processes that need high-speed communications. At the same time, P2P is not yet fully adopted, as it still faces many security issues.
TV: Quite a few applications are available to deliver TV content over a P2P network (P2PTV).
Telecommunication: Nowadays, users are no longer satisfied merely with being able to hear a person on the other side of the earth; demand for clearer, real-time voice is increasing globally. As with the TV network, cables are already in place, and it is unlikely that companies will replace them all. Many instead turn to the Internet, and more specifically to P2P networks. For instance, Skype, one of the most widely used Internet telephony applications, uses P2P technology. Furthermore, many research organizations are trying to apply P2P networking to cellular networks.
Tuesday 22 July 2008
US legal controversy
In Sony Corp. v. Universal Studios, 464 U.S. 417 (1984), the Supreme Court found that Sony's new product, the Betamax, did not subject Sony to secondary copyright liability because it was capable of substantial non-infringing uses. Decades later, this case became the jumping-off point for all peer-to-peer copyright infringement litigation.
The first peer-to-peer case was A&M Records v. Napster, 239 F.3d 1004 (9th Cir. 2001). In the Napster case, the 9th Circuit considered whether Napster was liable as a secondary infringer. First, the court considered whether Napster was contributorily liable for copyright infringement. To be found contributorily liable, Napster must have engaged in "personal conduct that encourages or assists the infringement."[2] The court found that Napster was contributorily liable for the copyright infringement of its end-users because it "knowingly encourages and assists the infringement of plaintiffs' copyrights."[3] The court went on to analyze whether Napster was vicariously liable for copyright infringement. The standard applied by the court was whether Napster "has the right and ability to supervise the infringing activity and also has a direct financial interest in such activities."[4] The court found that Napster did receive a financial benefit and had the right and ability to supervise the activity, meaning that the plaintiffs demonstrated a likelihood of success on the merits of their claim of vicarious infringement.[5] The court denied all of Napster's defenses, including its claim of fair use.
The next major peer-to-peer case was MGM v. Grokster, 545 U.S. 913 (2005). In this case, the Supreme Court found that even if Grokster was capable of substantial non-infringing uses, which the Sony Court found was enough to relieve one of secondary copyright liability, Grokster was still secondarily liable because it induced its users to infringe.[6]
Around the world in 2006, an estimated five billion songs, equating to 38,000 years of music, were swapped on peer-to-peer websites, while 509 million were purchased online.[7]
Unstructured and structured P2P networks
The P2P overlay network consists of all the participating peers as network nodes. There are links between any two nodes that know each other: i.e. if a participating peer knows the location of another peer in the P2P network, then there is a directed edge from the former node to the latter in the overlay network. Based on how the nodes in the overlay network are linked to each other, we can classify the P2P networks as unstructured or structured.
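As a concrete picture, the overlay can be represented as a directed graph. The following minimal Python sketch uses invented peer names; it only illustrates the definition above.

```python
# The overlay as a directed graph: an edge u -> v exists when peer u knows
# the network location of peer v (peer names are hypothetical).
overlay = {
    "alice": {"bob", "carol"},   # alice knows where bob and carol are
    "bob":   {"carol"},
    "carol": set(),              # carol has not yet learned any addresses
}
# Edges are directed: alice can contact bob, but bob cannot contact alice
# until he learns her address.
```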
An unstructured P2P network is formed when the overlay links are established arbitrarily. Such networks are easy to construct: a new peer that wants to join can copy the existing links of another node and then form its own links over time. In an unstructured P2P network, if a peer wants to find a desired piece of data, the query has to be flooded through the network to reach as many peers as possible that share the data. The main disadvantage of such networks is that queries may not always be resolved. Popular content is likely to be available at several peers, and any peer searching for it is likely to find it. But if a peer is looking for rare data shared by only a few other peers, the search is unlikely to succeed. Since there is no correlation between a peer and the content it manages, there is no guarantee that flooding will find a peer that has the desired data. Flooding also generates a large amount of signaling traffic, so such networks typically have very poor search efficiency. Most of the popular P2P networks, such as Gnutella and FastTrack, are unstructured.
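The flooding search described above can be sketched in a few lines of Python. This is illustrative only: the Peer class, its fields, and the time-to-live (TTL) scheme are assumptions of the sketch; real unstructured networks such as Gnutella add asynchronous messaging, duplicate-query suppression, and peer discovery on top of this basic idea.

```python
# Minimal sketch of flooded search in an unstructured overlay (hypothetical).
class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbors = []                      # arbitrary overlay links

    def search(self, filename, ttl, seen=None):
        """Flood the query to neighbors until the TTL expires; collect hits."""
        seen = seen if seen is not None else set()
        if self.name in seen:                    # already visited: stop
            return []
        seen.add(self.name)
        hits = [self.name] if filename in self.files else []
        if ttl > 0:
            for n in self.neighbors:
                hits += n.search(filename, ttl - 1, seen)
        return hits

# Popular content is found easily, but rare content can be missed when the
# TTL runs out before the query reaches the few peers that hold it.
a, b, c = Peer("a", ["hit.mp3"]), Peer("b", []), Peer("c", ["rare.txt"])
a.neighbors, b.neighbors = [b], [c]
print(a.search("rare.txt", ttl=2))               # ['c']: reached via b
print(a.search("rare.txt", ttl=1))               # []: TTL expired too early
```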
Structured P2P networks employ a globally consistent protocol to ensure that any node can efficiently route a search to some peer that has the desired file, even if the file is extremely rare. Such a guarantee necessitates a more structured pattern of overlay links. By far the most common type of structured P2P network is the distributed hash table (DHT), in which a variant of consistent hashing is used to assign ownership of each file to a particular peer, in a way analogous to a traditional hash table's assignment of each key to a particular array slot. Some well-known DHTs are Chord, Pastry, Tapestry, CAN, and Tulip. HyperCuP is a structured P2P network that does not take a DHT approach.
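The consistent-hashing idea behind DHTs can be sketched as follows, assuming SHA-1 as the hash function and a simple sorted ring; a real DHT such as Chord additionally maintains finger tables and handles churn, so a lookup is routed in O(log n) hops rather than computed from a complete peer list.

```python
# Minimal consistent-hashing sketch: files and peers share one hash space.
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    """Map a string onto the hash ring (SHA-1 is an assumption of this sketch)."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, peers):
        self.ring = sorted((h(p), p) for p in peers)   # peers placed on the ring

    def owner(self, filename):
        """A file is owned by the first peer clockwise from the file's hash."""
        ids = [pid for pid, _ in self.ring]
        i = bisect_right(ids, h(filename)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["peer-1", "peer-2", "peer-3"])
# Every node computes the same owner deterministically, so a query can be
# routed straight to it instead of being flooded, even for rare files.
print(ring.owner("rare-file.dat"))
```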
An important goal in P2P networks is that all clients provide resources, including bandwidth, storage space, and computing power. Thus, as nodes arrive and demand on the system increases, the total capacity of the system also increases. This is not true of a client-server architecture with a fixed set of servers, in which adding more clients could mean slower data transfer for all users.
The distributed nature of P2P networks also increases robustness in case of failures by replicating data over multiple peers and, in pure P2P systems, by enabling peers to find the data without relying on a centralized index server. In the latter case, there is no single point of failure in the system.[1]
Classifications of P2P networks
P2P networks can be classified by what they can be used for:
file sharing
telephony
media streaming (audio, video)
discussion forums
Another way to classify P2P networks is by their degree of centralization.
In 'pure' P2P networks:
Peers act as equals, merging the roles of client and server
There is no central server managing the network
There is no central router
Some examples of pure P2P application layer networks designed for file sharing are Gnutella and Freenet.
There also exist countless hybrid P2P systems, with the following characteristics (a minimal sketch of this central-index model appears after the examples below):
A central server keeps information on peers and responds to requests for that information.
Peers are responsible for hosting available resources (as the central server does not have them), for letting the central server know what resources they want to share, and for making their shareable resources available to peers that request them.
Route terminals are used as addresses, which are referenced by a set of indices to obtain an absolute address.
Examples:
Centralized P2P network such as Napster
Decentralized P2P network such as KaZaA
Structured P2P network such as CAN
Unstructured P2P network such as Gnutella
Hybrid P2P network (centralized and decentralized) such as JXTA (an open-source P2P protocol specification)
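To make the hybrid (centralized and decentralized) model concrete, here is a hypothetical sketch of a Napster-style central index: the server stores only metadata about which peer shares which file, while actual transfers happen directly between peers. All names and addresses are invented for illustration.

```python
# Hypothetical central index for a hybrid P2P system (Napster-style).
class IndexServer:
    def __init__(self):
        self.index = {}                          # filename -> set of peer addresses

    def register(self, peer_addr, filenames):
        """A joining peer tells the server which resources it will share."""
        for f in filenames:
            self.index.setdefault(f, set()).add(peer_addr)

    def lookup(self, filename):
        """The server answers searches with peer locations, never file data."""
        return self.index.get(filename, set())

server = IndexServer()
server.register("10.0.0.5:6699", ["track01.mp3"])
print(server.lookup("track01.mp3"))              # client downloads from that peer directly
```

Because the index is centralized, search is fast, but the server is a single point of failure, in contrast with the pure P2P model described above.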
Peer-to-peer
A peer-to-peer (or P2P) computer network uses diverse connectivity between participants in a network and the cumulative bandwidth of network participants, rather than conventional centralized resources where a relatively low number of servers provide the core value to a service or application. P2P networks are typically used for connecting nodes via largely ad hoc connections. Such networks are useful for many purposes. Sharing content files (see file sharing) containing audio, video, data or anything in digital format is very common, and real-time data, such as telephony traffic, is also passed using P2P technology.
A pure P2P network does not have the notion of clients or servers but only equal peer nodes that simultaneously function as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client-server model where communication is usually to and from a central server. A typical example of a file transfer that is not P2P is an FTP server where the client and server programs are quite distinct, the clients initiate the download/uploads, and the servers react to and satisfy these requests.
The earliest P2P network in widespread use was the Usenet news server system, in which peers communicated with one another to propagate Usenet news articles over the entire Usenet network. Particularly in the earlier days of Usenet, UUCP was used to extend even beyond the Internet. However, the news server system also acted in a client-server form when individual users accessed a local news server to read and post articles. The same consideration applies to SMTP email in the sense that the core email-relaying network of Mail transfer agents is a P2P network, while the periphery of Mail user agents and their direct connections is client-server.
Some networks and channels such as Napster, OpenNAP and IRC server channels use a client-server structure for some tasks (e.g. searching) and a P2P structure for others. Networks such as Gnutella or Freenet use a P2P structure for all purposes, and are sometimes referred to as true P2P networks, although Gnutella is greatly facilitated by directory servers that inform peers of the network addresses of other peers.
P2P architecture embodies one of the key technical concepts of the Internet, described in the first Internet Request for Comments, RFC 1, "Host Software" dated 7 April 1969. More recently, the concept has achieved recognition in the general public in the context of the absence of central indexing servers in architectures used for exchanging multimedia files.
The concept of P2P is increasingly evolving toward an expanded usage as the relational dynamic active in distributed networks, i.e. not just computer to computer, but human to human. Yochai Benkler has coined the term "commons-based peer production" to denote collaborative projects such as free software. Associated with peer production are the concepts of peer governance (referring to the manner in which peer production projects are managed) and peer property (referring to new types of licenses which recognize individual authorship but not exclusive property rights, such as the GNU General Public License and the Creative Commons licenses).
X.Org and XFree86
XFree86 originated in 1992 from the X386 server for IBM PC compatibles included with X11R5 in 1991, written by Thomas Roell and Mark W. Snitily and donated to the MIT X Consortium by Snitily Graphics Consulting Services (SGCS). XFree86 evolved over time from just one port of X to the leading and most popular implementation and the de facto steward of X's development.[12]
In May 1999, the Open Group formed X.Org. X.Org supervised the release of versions X11R6.5.1 onward. X development at this time had become moribund;[13] most technical innovation since the X Consortium had dissolved had taken place in the XFree86 project.[14] In 1999, the XFree86 team joined X.Org as an honorary (non-paying) member,[15] encouraged by various hardware companies[16] interested in using XFree86 with Linux and in its status as the most popular version of X.
By 2003, while the popularity of Linux (and hence the installed base of X) surged, X.Org remained inactive,[17] and active development took place largely within XFree86. However, considerable dissent developed within XFree86. The XFree86 project suffered from a perception of a far too cathedral-like development model: developers could not get CVS commit access[18][19] and vendors had to maintain extensive patch sets.[20] In March 2003 the XFree86 organization expelled Keith Packard, who had joined XFree86 after the end of the original MIT X Consortium, with considerable ill-feeling.[21][22][23]
X.Org and XFree86 began discussing a reorganisation suited to properly nurturing the development of X.[24][25][26] Jim Gettys had been pushing strongly for an open development model since at least 2000.[27] Gettys, Packard and several others began discussing in detail the requirements for the effective governance of X with open development.
Finally, in an echo of the X11R6.4 licensing dispute, XFree86 released version 4.4 in February 2004 under a more restricted license which many projects relying on X found unacceptable.[28] The added clause to the license was based upon the original BSD license's advertising clause, which was viewed by the Free Software Foundation and Debian as incompatible with the GNU General Public License.[29] Other groups saw further restrictions as being against the spirit of the original X (OpenBSD threatening a fork, for example). The license issue, combined with the difficulties in getting changes in, left many feeling the time was ripe for a fork.[30]
The X.Org Foundation
In early 2004 various people from X.Org and freedesktop.org formed the X.Org Foundation, and the Open Group gave it control of the x.org domain name. This marked a radical change in the governance of X. Whereas the stewards of X since 1988 (including the previous X.Org) had been vendor organizations, the Foundation was led by software developers and used community development based on the bazaar model, which relies on outside involvement. Membership was opened to individuals, with corporate membership being in the form of sponsorship. Several major corporations such as Hewlett-Packard and Sun Microsystems currently support the X.Org Foundation.
The Foundation takes an oversight role over X development: technical decisions are made on their merits by achieving rough consensus among community members. Technical decisions are not made by the board of directors; in this sense, it is strongly modelled on the technically non-interventionist GNOME Foundation. The Foundation does not employ any developers.
The Foundation released X11R6.7, the X.Org Server, in April 2004, based on XFree86 4.4RC2 with X11R6.6 changes merged. Gettys and Packard had taken the last version of XFree86 under the old license and, by making a point of an open development model and retaining GPL compatibility, brought many of the old XFree86 developers on board.[31]
X11R6.8 came out in September 2004. It added significant new features, including preliminary support for translucent windows and other sophisticated visual effects, screen magnifiers and thumbnailers, and facilities to integrate with 3D immersive display systems such as Sun's Project Looking Glass and the Croquet project. External applications called compositing window managers provide policy for the visual appearance.
On December 21, 2005,[32] X.Org released X11R6.9, the monolithic source tree for legacy users, and X11R7.0, the same source code separated into independent modules, each maintainable in separate projects.[33] The Foundation released X11R7.1 on May 22, 2006, about four months after 7.0, with considerable feature improvements.[34]
Future directions
With the X.Org Foundation and freedesktop.org, the main line of X development has started to progress rapidly once more. The developers intend to release present and future versions as usable finished products, not merely as bases for vendors to build a product upon.
For sufficiently capable combinations of hardware and operating systems, X.Org plans to access the video hardware only via OpenGL and the Direct Rendering Infrastructure (DRI). The DRI first appeared in XFree86 version 4.0 and became standard in X11R6.7 and later.[35] Many operating systems have started to add kernel support for hardware manipulation. This work proceeds incrementally.
Nomenclature
People in the computer trade commonly shorten the phrase "X Window System" to "X11" or simply to "X". The term "X Windows" (in the manner of "Microsoft Windows") is not officially endorsed, though it has been in common use since early in the history of X and has been used deliberately for literary effect, for example in the UNIX-HATERS Handbook.[36]
Release history
See also: XFree86#Release history
Version (release date): most important changes
X1 (June 1984): First use of the name "X"; fundamental changes distinguishing the product from W.
X6 (January 1985): First version licensed to a handful of outside companies.
X9 (September 1985): Color. First release under MIT License.
X10 (late 1985): IBM RT/PC, AT (running DOS), and others.
X10R2 (January 1986).
X10R3 (February 1986): First release outside MIT. uwm made standard window manager.
X10R4 (December 1986): Last version of X10.
X11 (September 15, 1987): First release of the current protocol.
X11R2 (February 1988): First X Consortium release.[37]
X11R3 (October 25, 1988): XDM.
X11R4 (December 22, 1989): XDMCP, twm brought in as standard window manager, application improvements, Shape extension, new fonts.
X11R4/X11R5 (December 1989): Commodore sells the Amiga 2500/UX (Unix-based), the first computer on the market featuring a standard X11-based desktop GUI, called Open Look. Running AT&T UNIX System V R4, the system was equipped with a 68020 or 68030 CPU accelerator card, a SCSI controller card, a Texas Instruments TIGA 24-bit graphics card capable of showing 256 colors on screen, and a three-button mouse.
X11R5 (September 5, 1991): PEX, Xcms (color management), font server, X386, X video extension.
X11R6 (May 16, 1994): ICCCM v2.0; Inter-Client Exchange; X Session Management; X Synchronization extension; X Image extension; XTEST extension; X Input; X Big Requests; XC-MISC; XFree86 changes.
X11R6.1 (March 14, 1996): X Double Buffer extension; X keyboard extension; X Record extension.
X11R6.2 / X11R6.3 (Broadway) (December 23, 1996): Web functionality, LBX. Last X Consortium release. X11R6.2 is the tag for a subset of X11R6.3, with the only new features over R6.1 being XPrint and the Xlib implementation of vertical writing and user-defined character support.[38]
X11R6.4 (March 31, 1998): Xinerama.[39]
X11R6.5: Internal X.org release; not made publicly available.
X11R6.5.1 (August 20, 2000).
X11R6.6 (April 4, 2001): Bug fixes, XFree86 changes.
X11R6.7.0 (April 6, 2004): First X.Org Foundation release, incorporating XFree86 4.4rc2. Full end-user distribution. Removal of XIE, PEX and libxml2.[40]
X11R6.8.0 (September 8, 2004): Window translucency, XDamage, Distributed Multihead X, XFixes, Composite, XEvIE.
X11R6.8.1 (September 17, 2004): Security fix in libxpm.
X11R6.8.2 (February 10, 2005): Bug fixes, driver updates.
X11R6.9 / X11R7.0 (December 21, 2005): EXA, major source code refactoring.[41] From the same source-code base, the modular autotooled version became 7.0 and the monolithic imake version was frozen at 6.9.
X11R7.1 (May 22, 2006): EXA enhancements, KDrive integrated, AIGLX, OS and platform support enhancements.[42]
X11R7.2 (February 15, 2007): Removal of LBX and the built-in keyboard driver, X-ACE, XCB, autoconfig improvements, cleanups.[43]
X11R7.3 (September 6, 2007): X server 1.4, input hotplug, output hotplug (RandR 1.2), DTrace probes, PCI domain support.