Monday, March 30, 2009

Disable Indexing Services

Indexing Service is a small background program that can use a large amount of RAM and often keeps a computer's hard drive grinding away noisily. This system process builds and updates an index of all the files on your computer, so that when you search for something, the search runs faster by scanning the index instead of the whole disk. If you rarely search your computer (and even if you do, searches will still work, just more slowly), this service is unnecessary and can be turned off. To disable it, do the following (a command-line alternative is sketched after the steps):

1. Go to Start
2. Click Settings
3. Click Control Panel
4. Double-click Add/Remove Programs
5. Click Add/Remove Windows Components
6. Uncheck Indexing Service
7. Click Next
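
If you prefer the command line, you can get a similar effect by stopping and disabling the underlying service instead of removing the Windows component. The sketch below is a rough Python example, not a polished tool; it assumes the service is registered as cisvc (the name the Indexing Service uses on Windows 2000/XP) and that it is run from an administrator account.

    # Rough command-line alternative: stop and disable the Indexing Service
    # instead of removing the Windows component. Assumes the service name
    # "cisvc" (Windows 2000/XP) and administrator rights.
    import subprocess

    SERVICE = "cisvc"   # Indexing Service

    # Stop the service if it is currently running.
    subprocess.call(["net", "stop", SERVICE])

    # Set its startup type to Disabled so it stays off after a reboot.
    # Note that sc.exe expects the space after "start=".
    subprocess.call(["sc", "config", SERVICE, "start=", "disabled"])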

Monday, March 23, 2009

Simple Mail Transfer Protocol

SMTP is a relatively simple, text-based protocol, in which one or more recipients of a message are specified (and in most cases verified to exist) along with the message text and possibly other encoded objects. The message is then transferred to a remote server using a series of queries and responses between the client and server. Either an end-user's e-mail client, a.k.a. MUA (Mail User Agent), or a relaying server's MTA (Mail Transport Agent) can act as an SMTP client.

An e-mail client knows the outgoing mail SMTP server from its configuration. A relaying server typically determines which SMTP server to connect to by looking up the MX (Mail eXchange) DNS record for each recipient's domain name. Conformant MTAs (not all are) fall back to a simple A record when no MX record exists (relaying servers can also be configured to use a smart host). The SMTP client initiates a TCP connection to the server's port 25 (unless overridden by configuration). It is quite easy to test an SMTP server by hand with the netcat program; a rough Python equivalent of such a session is sketched below.
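
As a stand-in for the netcat test, the sketch below opens a raw TCP connection to port 25 and walks through a minimal SMTP exchange. The host name and addresses are placeholders, and a real server may require authentication or refuse to relay.

    # Hand-driven SMTP session over a raw socket.
    # mail.example.com and the addresses are placeholders, not a real server.
    import socket

    def send(sock, line):
        # Send one SMTP command and print the server's reply.
        sock.sendall(line.encode("ascii") + b"\r\n")
        print(">", line)
        print("<", sock.recv(1024).decode("ascii", "replace").strip())

    s = socket.create_connection(("mail.example.com", 25), timeout=10)
    print("<", s.recv(1024).decode("ascii", "replace").strip())  # 220 greeting

    send(s, "HELO client.example.com")
    send(s, "MAIL FROM:<alice@example.com>")
    send(s, "RCPT TO:<bob@example.com>")
    send(s, "DATA")  # server answers 354 and waits for the message body
    s.sendall(b"Subject: test\r\n\r\nHello via raw SMTP.\r\n.\r\n")
    print("<", s.recv(1024).decode("ascii", "replace").strip())  # 250 accepted
    send(s, "QUIT")
    s.close()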

SMTP is a "push" protocol that cannot "pull" messages from a remote server on demand. To retrieve messages only on demand, which is the most common requirement on a single-user computer, a mail client must use POP3 or IMAP. Another SMTP server can trigger a delivery in SMTP using ETRN. It is possible to receive mail by running an SMTP server. POP3 became popular when single-user computers connected to the Internet only intermittently; SMTP is more suitable for a machine permanently connected to the Internet.
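
For the "pull" side, a mail client would use POP3 or IMAP rather than SMTP. Below is a minimal retrieval sketch using Python's standard poplib module; the host and credentials are placeholders.

    # Minimal POP3 retrieval; pop.example.com and the credentials are placeholders.
    import poplib

    box = poplib.POP3("pop.example.com")   # or poplib.POP3_SSL(...) on port 995
    box.user("alice")
    box.pass_("secret")

    count, total_bytes = box.stat()
    print("messages:", count, "bytes:", total_bytes)

    if count:
        # retr() returns (response, list_of_line_bytes, octets) for one message
        response, lines, octets = box.retr(1)
        print(b"\r\n".join(lines).decode("utf-8", "replace"))

    box.quit()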

Monday, March 16, 2009

HTTP-Hypertext Transfer Protocol

Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. Its use for retrieving inter-linked resources led to the establishment of the World Wide Web.

HTTP development was coordinated by the World Wide Web Consortium and the Internet Engineering Task Force (IETF), culminating in the publication of a series of Requests for Comments (RFCs), most notably RFC 2616 (June 1999), which defines HTTP/1.1, the version of HTTP in common use.

HTTP is a request/response standard between a client and a server. The client is typically the end user's browser or another tool acting on the user's behalf; the server hosts the web site. The client making an HTTP request—using a web browser, spider, or other end-user tool—is referred to as the user agent. The responding server—which stores or creates resources such as HTML files and images—is called the origin server. In between the user agent and origin server there may be several intermediaries, such as proxies, gateways, and tunnels. HTTP is not constrained to using TCP/IP and its supporting layers, although this is its most popular application on the Internet. Indeed, HTTP can be "implemented on top of any other protocol on the Internet, or on other networks. HTTP only presumes a reliable transport; any protocol that provides such guarantees can be used."
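
To make the request/response exchange concrete, here is a small sketch that sends a bare-bones HTTP/1.1 GET request over a TCP socket and prints the status line and headers that come back. It assumes plain HTTP on port 80 and uses example.com purely for illustration.

    # Send a minimal HTTP/1.1 request by hand and show the response headers.
    # example.com stands in for an arbitrary origin server.
    import socket

    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection(("example.com", 80), timeout=10) as s:
        s.sendall(request.encode("ascii"))
        response = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            response += chunk

    head, _, body = response.partition(b"\r\n\r\n")
    print(head.decode("iso-8859-1"))          # status line plus headers
    print("body bytes received:", len(body))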

Monday, March 09, 2009

Social network service

A social network service focuses on building online communities of people who share interests and/or activities, or who are interested in exploring the interests and activities of others. Most social network services are web based and provide a variety of ways for users to interact, such as e-mail and instant messaging services.

Social networking has created new ways to communicate and share information. Social networking websites are used regularly by millions of people, and it now seems that social networking will be an enduring part of everyday life. The main types of social networking services are those that contain category directories (such as former classmates), means to connect with friends (usually with self-description pages), and recommender systems linked to trust. Popular methods now combine many of these, with MySpace and Facebook being the most widely used in North America; Nexopia (mostly in Canada); Bebo, Facebook, Hi5, MySpace, Tagged, Xing and Skyrock in parts of Europe; Orkut, Facebook and Hi5 in South America and Central America; and Friendster, Orkut, Xiaonei and Cyworld in Asia and the Pacific Islands.

There have been some attempts to standardize these services to avoid the need to duplicate entries of friends and interests (see the FOAF standard and the Open Source Initiative), but this has led to some concerns about privacy.

Monday, March 02, 2009

The user-agent string

A user agent is the client application used with a particular network protocol; the phrase is most commonly used in reference to those which access the World Wide Web, but other systems such as SIP use the term user agent to refer to the user's phone. Web user agents range from web browsers and e-mail clients to search engine crawlers ("spiders"), as well as mobile phones, screen readers and Braille browsers used by people with disabilities. When Internet users visit a web site, a text string is generally sent to identify the user agent to the server. This forms part of the HTTP request, prefixed with User-Agent: (case does not matter) and typically includes information such as the application name, version, host operating system, and language. Bots, such as web crawlers, often also include a URL and/or e-mail address so that the webmaster can contact the operator of the bot.
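
As an illustration, the sketch below sends a request with an explicitly chosen User-Agent header using Python's standard urllib. The header value is a made-up string in the typical product/version (comment) format, with a contact URL and e-mail address of the kind a bot might include; it is not any real browser's token.

    # Send an HTTP request with an explicit, made-up User-Agent header.
    import urllib.request

    ua = ("ExampleBot/1.0 (Windows NT 5.1; en-US; "
          "+http://example.com/bot; bot@example.com)")

    req = urllib.request.Request(
        "http://example.com/",
        headers={"User-Agent": ua},   # HTTP header names are case-insensitive
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.status, resp.reason)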

The user-agent string is one of the criteria by which crawlers can be excluded from certain pages or parts of a website using the "Robots Exclusion Standard" (robots.txt). This allows webmasters who feel that certain parts of their website should not be included in the data gathered by a particular crawler, or that a particular crawler is using up too much bandwidth, to request that the crawler not visit those pages.
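
A small sketch of how that exclusion looks in practice, using Python's standard urllib.robotparser against a hypothetical robots.txt (the bot names and rules here are made up for illustration):

    # Evaluate a hypothetical robots.txt for two user-agent names.
    from urllib import robotparser

    robots_txt = [
        "User-agent: HungryCrawler",
        "Disallow: /",
        "",
        "User-agent: *",
        "Disallow: /private/",
    ]

    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt)

    # HungryCrawler is banned outright; everyone else is only asked
    # to stay out of /private/.
    print(rp.can_fetch("HungryCrawler", "http://example.com/index.html"))  # False
    print(rp.can_fetch("OtherBot", "http://example.com/index.html"))       # True
    print(rp.can_fetch("OtherBot", "http://example.com/private/data"))     # False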

The term user agent sniffing refers to the practice of websites serving different content depending on the user-agent string they receive. In practice this means a different version of a page may be shown when it is browsed with a particular browser (e.g. Microsoft Internet Explorer). An infamous example is the Outlook Web Access feature of Microsoft Exchange Server 2003: when viewed with Internet Explorer, much more functionality is displayed than the same page shows in any other browser. User agent sniffing is mostly considered poor practice for Web 2.0 web sites, since it encourages browser-specific design; webmasters are instead encouraged to create HTML markup that is as standards-compliant as possible, so that pages render correctly in as many browsers as possible.
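
For illustration only (and very much the pattern the paragraph warns against), a naive server-side check might look like the sketch below; the substring tests and template names are hypothetical.

    # Naive user agent sniffing of the kind described above.
    # The substring checks and template names are hypothetical examples.
    def choose_template(user_agent_header):
        ua = user_agent_header.lower()
        if "msie" in ua:
            return "rich_ie_version.html"      # extra, IE-only functionality
        return "standards_version.html"        # what every other browser gets

    print(choose_template("Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"))
    print(choose_template("Mozilla/5.0 (X11; Linux x86_64) Firefox/3.0"))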