Friday, July 24, 2009

Angelfire

Angelfire is an Internet venture offering free space for web sites. It also offered an online email service, which ceased operation in January 2002, and was long known for providing advertising-free hosting (at one point it even offered medical transcription services). The site was bought by Mountain View, California-based WhoWhere, which was itself subsequently purchased by the search engine company Lycos. As Lycos already offered ad-supported web hosting through its acquisition of Tripod.com, Angelfire's offering was modified for parity with Tripod: an increasing number of ads, but also more disk space.

As of 2008, Angelfire continues to operate separately from Tripod and now includes features such as blog building and a photo gallery builder. It also supports, for paid members only, CGI scripts written in Perl. Until May 2004, Angelfire offered free email (as a cobrand of Mailcity) at the @angelfire.com domain, but this feature has been replaced by webmail for premium users only (through Lycos Domains).

Although Angelfire and Tripod are very much separate sites, they share much of the same underlying software, such as the blog application. The Lycos brands, including both Angelfire and Tripod, were also licensed to a company in the UK, which later shut down its versions of the sites. Because the media reported the shutdown without noting that the UK sites were in no way related to their U.S. counterparts, Lycos issued a clarifying posting in January 2009.

Friday, July 10, 2009

Fiber Optic Cable

An optical fiber (or fibre) is a glass or plastic fiber that carries light along its length. Fiber optics is the overlap of applied science and engineering concerned with the design and application of optical fibers. Optical fibers are widely used in fiber-optic communications, which permits transmission over longer distances and at higher bandwidths (data rates) than other forms of communications. Fibers are used instead of metal wires because signals travel along them with less loss, and they are also immune to electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so they can be used to carry images, thus allowing viewing in tight spaces. Specially designed fibers are used for a variety of other applications, including sensors and fiber lasers.


Light is kept in the core of the optical fiber by total internal reflection. This causes the fiber to act as a waveguide. Fibers which support many propagation paths or transverse modes are called multi-mode fibers (MMF), while those which can only support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a larger core diameter, and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 550 metres (1,800 ft).
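The total internal reflection condition can be sketched numerically: a step-index fiber's numerical aperture and critical angle follow directly from the core and cladding refractive indices. The values below are illustrative silica-fiber figures, not taken from the text.

```python
import math

def numerical_aperture(n_core, n_clad):
    # NA = sqrt(n_core^2 - n_clad^2) for a step-index fiber;
    # it measures the cone of light the fiber can accept.
    return math.sqrt(n_core**2 - n_clad**2)

def critical_angle_deg(n_core, n_clad):
    # Rays hitting the core/cladding boundary at angles (from the
    # normal) above this value are totally internally reflected.
    return math.degrees(math.asin(n_clad / n_core))

# Illustrative refractive indices for a silica fiber (assumed)
na = numerical_aperture(1.48, 1.46)        # ~0.24
theta_c = critical_angle_deg(1.48, 1.46)   # ~80.6 degrees
```

The small index difference between core and cladding is what makes the critical angle so close to 90°, which is why light launched nearly parallel to the fiber axis stays trapped in the core.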

Joining lengths of optical fiber is more complex than joining electrical wire or cable. The ends of the fibers must be carefully cleaved, and then spliced together either mechanically or by fusing them together with an electric arc. Special connectors are used to make removable connections.

Wednesday, July 01, 2009

Cable modem


A cable modem is a type of modem that provides bi-directional data communication via radio frequency channels on a cable television (CATV) infrastructure. Cable modems are primarily used to deliver broadband Internet access in the form of cable Internet, taking advantage of the high bandwidth of a cable television network.

They are commonly deployed in Australia, Europe, and North and South America. In the USA alone there were 22.5 million cable modem users during the first quarter of 2005, up from 17.4 million in the first quarter of 2004.

Modem

Modem (from modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used over any means of transmitting analog signals, from light-emitting diodes to radio.

The most familiar example is a voiceband modem that turns the digital 1s and 0s of a personal computer into sounds that can be transmitted over the telephone lines of Plain Old Telephone Systems (POTS), and, once received on the other side, converts those sounds back into digital data for a USB, serial, or network connection. Modems are generally classified by the amount of data they can send in a given time, normally measured in bits per second, or "bps". They can also be classified by baud, the number of times the modem changes its signal state per second.
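The distinction between bps and baud can be made concrete with a small sketch; the 2400-baud, 4-bits-per-symbol figures below are illustrative, not from the text.

```python
def bit_rate(baud, bits_per_symbol):
    # bit rate (bps) = symbol rate (baud) x bits carried per symbol
    return baud * bits_per_symbol

# A modem signalling at 2400 baud with 4 bits per symbol (e.g. a
# 16-point constellation) moves 9600 bps -- which is why "baud"
# and "bps" are not interchangeable terms.
rate = bit_rate(2400, 4)
```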

Thursday, June 18, 2009

Streaming media

Many existing radio and television broadcasters provide Internet "feeds" of their live audio and video streams (for example, the BBC). They may also allow time-shift viewing or listening, such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of material is much wider, from pornography to highly specialized technical webcasts. Podcasting is a variation on this theme, where (usually audio) material is downloaded and played back on a computer, or shifted to a portable media player to be listened to on the move. These techniques, using simple equipment, allow anybody, with little censorship or licensing control, to broadcast audio-visual material on a worldwide basis.


Webcams can be seen as an even lower-budget extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular with many uses being found for personal webcams, with and without two-way sound.

YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with a vast number of users. It uses a flash-based web player to stream and show video files. Registered users may upload an unlimited amount of video and build their own personal profile. YouTube claims that its users watch hundreds of millions, and upload hundreds of thousands, of videos daily.

Thursday, June 04, 2009

Internet Protocol Suite

The Internet Protocol Suite (commonly known as TCP/IP) is the set of communications protocols used for the Internet and other similar networks. It is named after two of its most important protocols: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were the first two networking protocols defined in this standard. Today's IP networking represents a synthesis of several developments that began to evolve in the 1960s and 1970s, namely the Internet and Local Area Networks (LANs), which emerged in the mid- to late-1980s, together with the advent of the World Wide Web in the early 1990s.

The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each layer solves a set of problems involving the transmission of data, and provides a well-defined service to the upper layer protocols based on using services from some lower layers. Upper layers are logically closer to the user and deal with more abstract data, relying on lower layer protocols to translate data into forms that can eventually be physically transmitted.

The TCP/IP model consists of four layers (RFC 1122). From lowest to highest, these are the Link Layer, the Internet Layer, the Transport Layer, and the Application Layer.
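The four layers can be written out as a small lookup sketch; the protocol placements below are common textbook examples and are assumptions, not taken from the text.

```python
# The four RFC 1122 layers, lowest first, each with example protocols
# commonly placed at that layer (illustrative, not exhaustive).
TCP_IP_LAYERS = [
    ("Link Layer", ["Ethernet", "ARP"]),
    ("Internet Layer", ["IP", "ICMP"]),
    ("Transport Layer", ["TCP", "UDP"]),
    ("Application Layer", ["HTTP", "SMTP", "DNS"]),
]

def layer_of(protocol):
    # Return the layer a protocol is conventionally assigned to.
    for name, protocols in TCP_IP_LAYERS:
        if protocol in protocols:
            return name
    return None
```

A datagram travelling down the stack is wrapped by each layer in turn, which is exactly the encapsulation the surrounding paragraphs describe.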

Tuesday, May 12, 2009

Internet providers

An Internet service provider (ISP, also called Internet access provider, or IAP) is a company that offers its customers access to the Internet. The ISP connects to its customers using a data transmission technology appropriate for delivering Internet Protocol datagrams, such as dial-up, DSL, cable modem or dedicated high-speed interconnects. ISPs may provide Internet e-mail accounts to users, which allow them to communicate with one another by sending and receiving electronic messages through their ISPs' servers. (As part of their e-mail service, ISPs usually offer the user an e-mail client software package, developed either internally or through an outside contract.) ISPs may also provide other services, such as remotely storing data files on behalf of their customers, as well as services unique to each particular ISP.

ISPs employ a range of technologies to enable consumers to connect to their network. For home users and small businesses, the most popular options include dial-up, DSL (typically Asymmetric Digital Subscriber Line, ADSL), broadband wireless, cable modem, fiber to the home (FTTH), and Integrated Services Digital Network (ISDN), typically over a basic rate interface. For customers with more demanding requirements, such as medium-to-large businesses or other ISPs, DSL (often SHDSL or ADSL), Ethernet, Metro Ethernet, Gigabit Ethernet, Frame Relay, ISDN (BRI or PRI), ATM, satellite Internet access and synchronous optical networking (SONET) are more likely to be used.


Tuesday, May 05, 2009

Semi-Automatic Ground Environment


The Semi-Automatic Ground Environment (SAGE) was an automated control system for tracking and intercepting enemy bomber aircraft used by NORAD from the late 1950s into the 1980s. In later versions, the system could automatically direct aircraft to an interception by sending commands directly to the aircraft's autopilot.

By the time it was fully operational the Soviet bomber threat had been replaced by the Soviet missile threat, for which SAGE was entirely inadequate. Nevertheless, SAGE was tremendously important; it led to huge advances in online systems and interactive computing, real-time computing, and data communications using modems. It is generally considered to be one of the most advanced and successful large computer systems ever developed.

Thursday, April 16, 2009

History of the Internet

Before the widespread internetworking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network, and the prevalent computer networking method was based on the central mainframe computer model. Several research programs began to explore and articulate principles of networking between separate physical networks, leading to the development of the packet switching model of digital networking. These research efforts included those of the laboratories of Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock (MIT and UCLA). The research led to the development of several packet-switched networking solutions in the late 1960s and 1970s, including ARPANET and the X.25 protocols.

Additionally, public access and hobbyist networking systems grew in popularity, including Unix-to-Unix Copy (UUCP) and FidoNet. They were, however, still disjointed, separate networks, served only by limited gateways between them. This led to the application of packet switching to develop a protocol for inter-networking, in which multiple different networks could be joined together into a super-framework of networks. By defining a simple common network system, the Internet protocol suite, the concept of the network could be separated from its physical implementation. This gave rise to the idea of a global inter-network that would be called 'the Internet', which spread quickly as existing networks were converted to be compatible with it.

This spread quickly across the advanced telecommunication networks of the Western world, and then began to penetrate the rest of the world as it became the de facto international standard and global network. However, the disparity of growth has led to a digital divide that is still a concern today.


Monday, April 06, 2009

The client-server model


A client is an application or system that accesses a remote service on another computer system, known as a server, by way of a network. The term was first applied to devices that could not run their own stand-alone programs, but could interact with remote computers via a network. These "dumb terminals" were clients of the time-sharing mainframe computer.

The client-server model is still used today on the Internet, where a user may connect to a service operating on a remote system through the internet protocol suite. Web browsers are clients that connect to web servers and retrieve web pages for display. Most people use e-mail clients to retrieve their e-mail from their internet service provider's mail storage servers.
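A minimal sketch of the model using Python's standard socket module: a toy echo server plays the server role on localhost, and a client connects, sends a request and reads the reply. The port is chosen by the OS and all names are illustrative.

```python
import socket
import threading

def run_echo_server(listening_sock):
    # Server role: accept one client, echo back whatever it sends.
    conn, _addr = listening_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Set up the server on an ephemeral localhost port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=run_echo_server, args=(server,))
t.start()

# Client role: connect to the remote service, send, receive.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, server")
reply = client.recv(1024)
client.close()
t.join()
server.close()
```

A web browser fetching a page from a web server follows the same shape, just with HTTP as the request/reply vocabulary.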

Monday, March 30, 2009

Disable Indexing Services

The Indexing Service is a background process that can use large amounts of RAM and disk activity, often making a computer slow and noisy. It indexes and updates lists of all the files on your computer, so that when you search for something, the search runs faster by scanning the index lists instead of the whole disk. If you don't search your computer often, this system service may be unnecessary. To disable it, do the following:

1. Go to Start
2. Click Settings
3. Click Control Panel
4. Double-click Add/Remove Programs
5. Click Add/Remove Windows Components
6. Uncheck Indexing Service
7. Click Next

Monday, March 23, 2009

Simple Mail Transfer Protocol

SMTP is a relatively simple, text-based protocol, in which one or more recipients of a message are specified (and in most cases verified to exist) along with the message text and possibly other encoded objects. The message is then transferred to a remote server using a series of queries and responses between the client and server. Either an end-user's e-mail client, a.k.a. MUA (Mail User Agent), or a relaying server's MTA (Mail Transport Agent) can act as an SMTP client.

An e-mail client knows the outgoing mail SMTP server from its configuration. A relaying server typically determines which SMTP server to connect to by looking up the MX (Mail eXchange) DNS record for each recipient's domain name. Conformant MTAs (though not all are conformant) fall back to a simple A record when no MX record exists (relaying servers can also be configured to use a smart host). The SMTP client initiates a TCP connection to the server's port 25 (unless overridden by configuration). It is quite easy to test an SMTP server using the netcat program (see below).
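A typical exchange, as one might see when testing a server with netcat, looks like the following transcript (S: lines from the server, C: lines typed by the client; all hostnames and addresses are illustrative):

```
S: 220 smtp.example.com ESMTP ready
C: HELO client.example.org
S: 250 smtp.example.com
C: MAIL FROM:<alice@example.org>
S: 250 Ok
C: RCPT TO:<bob@example.com>
S: 250 Ok
C: DATA
S: 354 End data with <CR><LF>.<CR><LF>
C: Subject: test message
C:
C: Hello Bob, this is a test.
C: .
S: 250 Ok: queued
C: QUIT
S: 221 Bye
```

Each client command draws a numeric reply code from the server, which is the "series of queries and responses" described above.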

SMTP is a "push" protocol that cannot "pull" messages from a remote server on demand. To retrieve messages only on demand, which is the most common requirement on a single-user computer, a mail client must use POP3 or IMAP. Another SMTP server can trigger a delivery in SMTP using ETRN. It is possible to receive mail by running an SMTP server. POP3 became popular when single-user computers connected to the Internet only intermittently; SMTP is more suitable for a machine permanently connected to the Internet.

Monday, March 16, 2009

HTTP-Hypertext Transfer Protocol

Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. Its use for retrieving inter-linked resources led to the establishment of the World Wide Web.

HTTP development was coordinated by the World Wide Web Consortium and the Internet Engineering Task Force (IETF), culminating in the publication of a series of Requests for Comments (RFCs), most notably RFC 2616 (June 1999), which defines HTTP/1.1, the version of HTTP in common use.

HTTP is a request/response standard between a client and a server. A client is the end-user; the server is the web site. The client making an HTTP request, using a web browser, spider, or other end-user tool, is referred to as the user agent. The responding server, which stores or creates resources such as HTML files and images, is called the origin server. In between the user agent and origin server may be several intermediaries, such as proxies, gateways, and tunnels. HTTP is not constrained to using TCP/IP and its supporting layers, although this is its most popular application on the Internet. Indeed, HTTP can be "implemented on top of any other protocol on the Internet, or on other networks. HTTP only presumes a reliable transport; any protocol that provides such guarantees can be used."
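What the user agent actually sends to the origin server over the transport is plain text; a minimal sketch, with an illustrative host and path:

```python
def build_get_request(host, path):
    # A minimal HTTP/1.1 GET request: request line, headers, then a
    # blank line. Lines are terminated with CRLF per the standard.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

request = build_get_request("www.example.com", "/index.html")
```

The origin server answers with a status line (such as "HTTP/1.1 200 OK"), its own headers, and the resource body.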

Monday, March 09, 2009

Social network service

A social network service focuses on building online communities of people who share interests and/or activities, or who are interested in exploring the interests and activities of others. Most social network services are web based and provide a variety of ways for users to interact, such as e-mail and instant messaging services.

Social networking has created new ways to communicate and share information. Social networking websites are being used regularly by millions of people, and it now seems that social networking will be an enduring part of everyday life. The main types of social networking services are those which contain directories of some categories (such as former classmates), means to connect with friends (usually with self-description pages), and recommender systems linked to trust. Popular sites now combine many of these, with MySpace and Facebook being the most widely used in North America; Nexopia (mostly in Canada); Bebo, Facebook, Hi5, MySpace, Tagged, Xing and Skyrock in parts of Europe; Orkut, Facebook and Hi5 in South America and Central America; and Friendster, Orkut, Xiaonei and Cyworld in Asia and the Pacific Islands.

There have been some attempts to standardize these services to avoid the need to duplicate entries of friends and interests (see the FOAF standard and the Open Source Initiative), but this has led to some concerns about privacy.

Monday, March 02, 2009

The user-agent string

A user agent is the client application used with a particular network protocol; the phrase is most commonly used in reference to those which access the World Wide Web, but other systems such as SIP use the term user agent to refer to the user's phone. Web user agents range from web browsers and e-mail clients to search engine crawlers ("spiders"), as well as mobile phones, screen readers and Braille browsers used by people with disabilities. When Internet users visit a web site, a text string is generally sent to identify the user agent to the server. This forms part of the HTTP request, prefixed with User-Agent: (case does not matter) and typically includes information such as the application name, version, host operating system, and language. Bots, such as web crawlers, often also include a URL and/or e-mail address so that the webmaster can contact the operator of the bot.

The user-agent string is one of the criteria by which crawlers can be excluded from certain pages or parts of a website using the "Robots Exclusion Standard" (robots.txt). This allows webmasters who feel that certain parts of their website should not be included in the data gathered by a particular crawler, or that a particular crawler is using up too much bandwidth, to request that crawler not to visit those pages.
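Python's standard urllib.robotparser module can evaluate such rules directly; the robots.txt content and bot names below are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: one crawler is excluded from /private/,
# while all other user agents may fetch anything.
rules = [
    "User-agent: BadBot",
    "Disallow: /private/",
    "",
    "User-agent: *",
    "Disallow:",
]

rp = RobotFileParser()
rp.parse(rules)

blocked = rp.can_fetch("BadBot", "http://www.example.com/private/page.html")
allowed = rp.can_fetch("OtherBot", "http://www.example.com/private/page.html")
```

A well-behaved crawler performs exactly this check, matching its own user-agent string against the rules before requesting a page.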

The term user agent sniffing refers to the practice of websites showing different content when viewed with a certain user agent. On the Internet, this results in a different site being shown when the page is browsed with a specific browser (e.g. Microsoft Internet Explorer). An infamous example of this is Microsoft Exchange Server 2003's Outlook Web Access feature: when viewed with IE, much more functionality is displayed compared to the same page in any other browser. User agent sniffing is mostly considered poor practice for Web 2.0 web sites, since it encourages browser-specific design. Webmasters are instead encouraged to create HTML markup that is as standards-compliant as possible, to allow correct rendering in as many browsers as possible.

Monday, February 23, 2009

Web browser

A Web browser is a software application used to locate and display Web pages. Two of the most popular browsers are Microsoft Internet Explorer and Firefox.

Both of these are graphical browsers, which means that they can display graphics as well as text. In addition, most modern browsers can present multimedia information, including sound and video, though they require plug-ins for some formats.

Monday, February 16, 2009

DEC 3000 AXP

DEC 3000 AXP was the name given to a series of computer workstations and servers, produced from 1992 to around 1995 by Digital Equipment Corporation. The DEC 3000 AXP series formed part of the first generation of computer systems based on the 64-bit Alpha AXP architecture. Supported operating systems for the DEC 3000 AXP series were DEC OSF/1 AXP (later renamed Digital UNIX) and OpenVMS AXP (later renamed OpenVMS).

All DEC 3000 AXP models used the DECchip 21064 (EV4) or DECchip 21064A (EV45) processor and inherited various features from the earlier MIPS architecture-based DECstation models, such as the TURBOchannel bus and the I/O subsystem.

The DEC 3000 AXP series was superseded in late 1994, with workstation models replaced by the AlphaStation line and server models replaced by the AlphaServer line.

Monday, February 09, 2009

Budget

Budget (from the French bougette, a small purse) generally refers to a list of all planned expenses and revenues. It is a plan for saving and spending. A budget is an important concept in microeconomics, which uses a budget line to illustrate the trade-offs between two or more goods. In other terms, a budget is an organizational plan stated in monetary terms.

In summary, the purpose of budgeting is to:

1. Provide a forecast of revenues and expenditures, i.e. construct a model of how the business might perform financially if certain strategies, events and plans are carried out.
2. Enable the actual financial operation of the business to be measured against the forecast.

Monday, February 02, 2009

Sputnik


Sputnik (Russian: "Спутник-1", Russian pronunciation: [sputnik], "Satellite-1"; ПС-1 (PS-1, i.e. "Простейший Спутник-1", or Elementary Satellite-1)) was the world's first Earth-orbiting artificial satellite. It circled the Earth in 96.2 minutes. Launched into a low-altitude elliptical orbit by the Soviet Union on October 4, 1957, it was the first in a series of satellites collectively known as the Sputnik program. The unanticipated announcement of Sputnik 1's success precipitated the Sputnik crisis in the United States and ignited the Space Race within the Cold War.

Sputnik helped to identify the upper atmospheric layer's density, through measuring the satellite's orbital changes. It also provided data on radio-signal distribution in the ionosphere. Pressurized nitrogen, in the satellite's body, provided the first opportunity for meteoroid detection. If a meteoroid penetrated the satellite's outer hull, it would be detected by the temperature data sent back to Earth.

Monday, January 26, 2009

IBM VNET

VNET is an international computer networking system deployed in the mid-1970s and still in current, though highly diminished, use. It was developed inside IBM and provided the main email and file-transfer backbone for the company throughout the 1980s and 1990s. Through it, a number of protocols were developed to deliver e-mail among time-sharing computers over alternate transmission systems.

VNET was first deployed as a private host-to-host network among CP/67 and VM/370 mainframes beginning before 1975. It was based on RSCS, a virtual-machine-based communications program. RSCS used synchronous data link protocols, not SNA/SDLC, to support file-to-file transfer among virtual machine users. The first several nodes included Scientific Centers and Poughkeepsie, New York lab sites.

Monday, January 12, 2009

Broadband Internet access

Broadband Internet access, often shortened to just broadband, is high data rate Internet access—typically contrasted with dial-up access over a modem.

Dial-up modems are generally only capable of a maximum bitrate of 56 kbit/s (kilobits per second) and require the full use of a telephone line—whereas broadband technologies supply at least double this bandwidth and generally without disrupting telephone use.

Although various minimum bandwidths have been used in definitions of broadband, ranging from 64 kbit/s up to 1.0 Mbit/s, the 2006 OECD report is typical in defining broadband as having download data transfer rates equal to or faster than 256 kbit/s, while the United States FCC, as of 2008, defines broadband as anything above 768 kbit/s. The trend is to raise the threshold of the broadband definition as the marketplace rolls out faster services each year.
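The practical difference between these rates can be sketched with a little arithmetic; the 5 MB file size and the decimal-megabyte convention below are illustrative assumptions.

```python
def download_seconds(size_megabytes, rate_kbit_per_s):
    # Convert MB (here 10^6 bytes) to bits, then divide by the
    # line rate in bits per second. Ignores protocol overhead.
    bits = size_megabytes * 1_000_000 * 8
    return bits / (rate_kbit_per_s * 1000)

dialup = download_seconds(5, 56)      # ~714 s over a 56 kbit/s modem
broadband = download_seconds(5, 256)  # 156.25 s at the OECD threshold
```

Even the minimal 256 kbit/s definition cuts the transfer time by more than a factor of four, which is the gap the definitions above are trying to capture.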

Monday, January 05, 2009

Voice over Internet Protocol

Voice over Internet Protocol (VoIP, IPA: /vɔɪp/) is a general term for a family of transmission technologies for delivery of voice communications over IP networks such as the Internet or other packet-switched networks. Other terms frequently encountered and synonymous with VoIP are IP telephony, Internet telephony, voice over broadband (VoBB), broadband telephony, and broadband phone.

VoIP systems usually interface with the traditional public switched telephone network (PSTN) to allow for transparent phone communications worldwide.

VoIP systems employ session control protocols to control the set-up and tear-down of calls, as well as audio codecs, which encode speech for transmission over an IP network as digital audio via an audio stream. Codec use varies between implementations of VoIP (and often a range of codecs is supported); some implementations rely on narrowband, compressed speech, while others support high-fidelity stereo codecs.