Monday, December 29, 2008

E-mail

Electronic mail, often abbreviated to e-mail, email or eMail, is any method of creating, transmitting, or storing primarily text-based human communications with digital communications systems. Historically, a variety of electronic mail system designs evolved that were often incompatible or not interoperable. With the proliferation of the Internet since the early 1980s, however, the standardization efforts of Internet architects succeeded in promulgating a single standard based on the Simple Mail Transfer Protocol (SMTP), first published as Internet Standard 10 (RFC 821) in 1982.
Modern e-mail systems are based on a store-and-forward model in which e-mail server systems accept, forward, or store messages on behalf of users, who connect to the e-mail infrastructure with their personal computer or other network-enabled device only for the duration of message transmission or retrieval to or from their designated server. Rarely is e-mail transmitted directly from one user's device to another's.
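As a minimal sketch of how a message enters that store-and-forward chain, the snippet below builds an RFC 5322-style message with Python's standard library; the addresses and the server name in the comments are hypothetical, and the actual network hand-off via smtplib is shown only as commented-out code:

```python
from email.message import EmailMessage

# Build an RFC 5322-style message; the addresses are hypothetical.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Store-and-forward demo"
msg.set_content("E-mail travels server to server, not device to device.")

# In a real submission, smtplib would hand the message to the sender's
# designated server, which stores it and forwards it toward bob's server:
#   import smtplib
#   with smtplib.SMTP("mail.example.com", 587) as s:  # hypothetical host
#       s.send_message(msg)

print(msg["To"])
print(msg.get_content().strip())
```

The sender's client is only online long enough to submit the message; everything after that happens between servers.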

Monday, December 22, 2008

Internet Protocol Suite

The Internet Protocol Suite (commonly known as TCP/IP) is the set of communications protocols used for the Internet and other similar networks. It is named after two of its most important protocols: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were the first two networking protocols defined in this standard. Today's IP networking represents a synthesis of several developments that began to evolve in the 1960s and 1970s, namely the Internet and Local Area Networks (LANs), which emerged in the mid- to late 1980s, together with the invention of the World Wide Web by Tim Berners-Lee in 1989 (adoption of which exploded with the availability of the first popular web browser, Mosaic).

The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each layer solves a set of problems involving the transmission of data, and provides a well-defined service to the upper layer protocols using services from the layers below. Upper layers are logically closer to the user and deal with more abstract data, relying on lower layer protocols to translate data into forms that can eventually be physically transmitted.
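The layering idea can be sketched as encapsulation: each layer wraps the payload handed down from the layer above with its own header. The snippet below is a toy illustration only; the header strings are invented placeholders, not real TCP, IP, or Ethernet packet formats:

```python
# Toy encapsulation demo: each layer prepends its own (fake) header
# to the payload it receives from the layer above.

app_data = b"GET / HTTP/1.1"              # Application layer payload
segment  = b"[TCP:80]" + app_data         # Transport layer adds port info
packet   = b"[IP:203.0.113.5]" + segment  # Internet layer adds addressing
frame    = b"[ETH:aa:bb:cc]" + packet     # Link layer adds local delivery

print(frame)
```

On the receiving host the process runs in reverse: each layer strips its own header and passes the remaining bytes upward.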

Monday, December 15, 2008

Internet Governance Forum

The Internet Governance Forum (IGF) is a multi-stakeholder forum for policy dialogue on issues of Internet governance[1]. The establishment of the IGF was formally announced by the United Nations Secretary-General in July 2006, and it was first convened in October/November 2006.

Flash memory is non-volatile, which means that no power is needed to maintain the information stored in the chip. In addition, flash memory offers fast read access times (although not as fast as volatile DRAM memory used for main memory in PCs) and better kinetic shock resistance than hard disks. These characteristics explain the popularity of flash memory in portable devices. Another feature of flash memory is that when packaged in a "memory card," it is enormously durable, being able to withstand intense pressure, extremes of temperature, and even immersion in water.

Friday, December 12, 2008

Broadband Internet access

Broadband Internet access, often shortened to just broadband, is high data rate Internet access—typically contrasted with dial-up access over a modem.

Dial-up modems are generally only capable of a maximum bitrate of 56 kbit/s (kilobits per second) and require the full use of a telephone line—whereas broadband technologies supply at least double this bandwidth and generally without disrupting telephone use.

Although various minimum bandwidths have been used in definitions of broadband, ranging from 64 kbit/s up to 1.0 Mbit/s, the 2006 OECD report is typical in defining broadband as having download data transfer rates equal to or faster than 256 kbit/s, while the United States FCC, as of 2008, defines broadband as anything above 768 kbit/s. The trend is to raise the threshold of the broadband definition as the marketplace rolls out faster services each year.
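These thresholds are easy to compare with a little arithmetic. The sketch below computes idealized transfer times for a 1 MB file at the dial-up, OECD, and FCC rates quoted above, ignoring protocol overhead and latency:

```python
def seconds_to_download(size_bytes, rate_kbit_s):
    """Ideal transfer time: bits to send divided by bits per second,
    ignoring protocol overhead and latency."""
    return (size_bytes * 8) / (rate_kbit_s * 1000)

one_mb = 1_000_000  # a 1 MB file

print(round(seconds_to_download(one_mb, 56), 1))   # dial-up ceiling: ~142.9 s
print(round(seconds_to_download(one_mb, 256), 1))  # OECD broadband floor: ~31.2 s
print(round(seconds_to_download(one_mb, 768), 1))  # 2008 FCC threshold: ~10.4 s
```

Real-world times are longer, since modem compression, TCP overhead, and line quality all intrude, but the order-of-magnitude gap between dial-up and broadband is clear.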

Monday, December 08, 2008

ICANN

The Internet Corporation for Assigned Names and Numbers (ICANN) is the authority that coordinates the assignment of unique identifiers on the Internet, including domain names, Internet Protocol (IP) addresses, and protocol port and parameter numbers. A globally unified namespace (i.e., a system of names in which there is at most one holder for each possible name) is essential for the Internet to function. ICANN is headquartered in Marina del Rey, California, but is overseen by an international board of directors drawn from across the Internet technical, business, academic, and non-commercial communities.

The US government continues to have the primary role in approving changes to the root zone file that lies at the heart of the domain name system. Because the Internet is a distributed network comprising many voluntarily interconnected networks, the Internet has no governing body. ICANN's role in coordinating the assignment of unique identifiers distinguishes it as perhaps the only central coordinating body on the global Internet, but the scope of its authority extends only to the Internet's systems of domain names, IP addresses, protocol ports and parameter numbers.
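Two of those identifier registries can be touched from Python's standard socket module without any real network traffic: resolving "localhost" is handled locally, and the well-known port numbers are read from the system's local services database. This is a sketch of the concept, not of how ICANN itself operates:

```python
import socket

# The domain name system maps each globally unique name to its holder's
# records; "localhost" resolves locally, with no network lookup needed.
ip = socket.gethostbyname("localhost")
print(ip)  # conventionally 127.0.0.1

# Well-known protocol port numbers are another coordinated registry,
# mirrored in the local services database:
print(socket.getservbyname("http", "tcp"))
print(socket.getservbyname("smtp", "tcp"))
```

Real domain lookups instead query the DNS hierarchy, which is ultimately rooted in the root zone file described above.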

Friday, November 07, 2008

Ozone

Ozone is a gas. It can be good or bad, depending on where it is. "Good" ozone occurs naturally about 10 to 30 miles above the Earth's surface. It shields us from the sun's ultraviolet rays. Part of the good ozone layer is gone - destroyed by man-made chemicals. Without enough good ozone, people may get too much ultraviolet radiation. This may increase the risk of skin cancer, cataracts and immune system problems.

"Bad" ozone is at ground level. It forms when pollutants from cars, factories and other sources react chemically with sunlight. It is the main ingredient in smog. It is usually worst in the summer. Breathing bad ozone can be harmful, causing coughing, throat irritation, worsening of asthma, bronchitis and emphysema, and even permanent lung damage, if you are regularly exposed to it.

Monday, October 20, 2008

Internet structure

There have been many analyses of the Internet and its structure. For example, it has been determined that the Internet IP routing structure and the hypertext links of the World Wide Web are examples of scale-free networks.
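The defining trait of a scale-free network is that a few "hub" nodes account for a disproportionate share of the links. The toy hyperlink graph below (all page names are invented) shows how a degree count exposes such a hub:

```python
from collections import Counter

# Toy hyperlink graph; page names are hypothetical. In a scale-free
# network a handful of hubs collect most of the links.
links = {
    "hub.example": ["a", "b", "c", "d", "e"],
    "a": ["hub.example"], "b": ["hub.example"],
    "c": ["hub.example"], "d": ["a"], "e": [],
}

degree = Counter()
for page, outlinks in links.items():
    degree[page] += len(outlinks)        # outgoing links
    for target in outlinks:
        degree[target] += 1              # incoming links

print(degree.most_common(1))  # the hub dominates the degree count
```

At Web scale, the degree distribution of such graphs follows a power law: most pages have few links, while a small number of hubs have enormously many.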

Similar to the way commercial Internet providers interconnect at Internet exchange points, research networks tend to interconnect into large subnetworks such as the following:

* GEANT

* GLORIAD

* The Internet2 Network

* JANET

These in turn are built around relatively smaller networks. See also the list of academic computer network organizations.

In computer network diagrams, the Internet is often represented by a cloud symbol, into and out of which network traffic can pass.

Monday, October 13, 2008

Internet protocols

The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet.

The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF). The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting discussions and final standards are published in Requests for Comments (RFCs), freely available on the IETF web site.

The principal methods of networking that enable the Internet are contained in a series of RFCs that constitute the Internet Standards. These standards describe a system known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered scheme of protocols. The layers correspond to the environment or scope in which their services operate. At the top is the space of the software application, e.g., a web browser application, and just below it is the Transport Layer, which connects applications on different hosts via the network. The underlying network consists of two layers: the Internet Layer, which enables computers to connect to one another via intermediate networks and is thus the layer that establishes internetworking and the Internet, and lastly, at the bottom, a software layer that provides connectivity between hosts on the same local link, e.g., a local area network or a dial-up connection. This model is also known as the TCP/IP model of networking. While other models have been developed, such as the Open Systems Interconnection (OSI) model, they are not compatible in the details of description or implementation.
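The Transport Layer's role of connecting applications with a reliable byte stream can be demonstrated locally: Python's `socket.socketpair()` provides the same byte-stream socket API that a TCP connection would, with no real network involved. A minimal sketch:

```python
import socket

# socketpair() yields two connected sockets with the same byte-stream
# API that the Transport Layer offers applications over a real network.
a, b = socket.socketpair()

a.sendall(b"application-layer payload")  # handed down by the application
data = b.recv(1024)                      # delivered to the peer application
print(data.decode())

a.close()
b.close()
```

Over a real network the same `sendall`/`recv` calls would ride on TCP segments inside IP packets; the application code is oblivious to the layers beneath it, which is precisely the point of the layered design.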

Sunday, October 05, 2008

University students' appreciation of and contributions to the Internet

New developments in networking during the 1960s, 1970s and 1980s were quickly adopted by universities across North America.


Examples of early university Internet communities are Cleveland FreeNet, Blacksburg Electronic Village, and NSTN in Nova Scotia. Students took up the opportunity of free communications and saw this new phenomenon as a tool of liberation. Personal computers and the Internet would free them from corporations and governments.


Graduate students played a huge part in the creation of ARPANET. In the 1960s, the Network Working Group, which did most of the design for ARPANET's protocols, was composed mainly of graduate students.

Sunday, September 28, 2008

Growth of Internet

Although the basic applications and guidelines that make the Internet possible had existed for almost a decade, the system did not gain a public face until the 1990s. On August 6, 1991, CERN, which straddles the border between France and Switzerland, publicized the new World Wide Web project. The Web was invented by English scientist Tim Berners-Lee in 1989.

An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks. During the 1990s, it was estimated that the Internet grew by 100% per year, with a brief period of explosive growth in 1996 and 1997. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.

Sunday, September 21, 2008

Terminology

The terms "Internet" and "World Wide Web" are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs.
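The distinction is visible in the anatomy of a URL. Parsing one with Python's standard library (the URL below is a placeholder using the reserved example.com domain) separates the Web-level protocol from the Internet-level addressing:

```python
from urllib.parse import urlparse

# A URL names a Web resource; the Internet underneath delivers it.
url = "http://www.example.com/path/page.html"
parts = urlparse(url)

print(parts.scheme)  # "http": the Web-level protocol, one service among many
print(parts.netloc)  # "www.example.com": the host, resolved via DNS
print(parts.path)    # "/path/page.html": the resource on that host
```

The scheme could just as well be a non-Web service (mail, file transfer, and so on) running over the same Internet infrastructure, which is exactly why Internet and Web are not interchangeable terms.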