Wednesday, April 13, 2011

Homework for Chapter 8

1.) Most electronic commerce Web sites use a three-tier client/server architecture.  In about 100 words, explain why they do and briefly describe what happens in the third tier of most electronic commerce Web sites.

Most electronic commerce sites use a three-tier architecture because it separates the work cleanly: the client handles presentation, the web server handles page generation, and the back end handles data.  This lets each layer be scaled, secured, and maintained independently, which matters for sites that must serve many simultaneous shoppers and generate pages dynamically from catalog and customer data.  The third tier typically contains databases and applications that search for, retrieve, and process information.  It then returns the resulting content to the web server so that the second tier can format the data and generate dynamic web pages.
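To make the division of labor concrete, here is a minimal sketch (my own illustration, not from the textbook) using an in-memory SQLite database to stand in for the third tier and a simple formatting function for the second tier.  The table, product names, and function names are all hypothetical:

```python
import sqlite3

def third_tier_lookup(conn, product_id):
    """Third tier: search the database and return raw data."""
    row = conn.execute(
        "SELECT name, price FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    return {"name": row[0], "price": row[1]} if row else None

def second_tier_render(data):
    """Second tier: format the raw data into a dynamic page fragment."""
    return f"<p>{data['name']}: ${data['price']:.2f}</p>"

# Hypothetical catalog data standing in for a real product database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
conn.execute("INSERT INTO products VALUES (1, 'Widget', 9.99)")

page_fragment = second_tier_render(third_tier_lookup(conn, 1))
print(page_fragment)  # <p>Widget: $9.99</p>
```

The first tier would be the shopper's browser, which only ever sees the finished HTML.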

2.) Describe and briefly discuss two important measures of a Web site's performance.

One of the most important things to measure about a Web site's performance is the number of users the server can handle simultaneously.  An accurate measurement can be difficult to make, since the result depends on the bandwidth of the Internet connection between server and client and on the size of the Web pages being delivered.  The two factors to evaluate when measuring the server's capacity are throughput, which is the number of HTTP requests that a given hardware/software configuration can process in a given length of time, and response time, which is the length of time it takes to process a single request.
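The two measures above can be computed from a batch of timed requests.  This is just a rough sketch of my own: the `handler` function is a hypothetical stand-in for whatever actually processes a request, not a real benchmarking tool:

```python
import time

def measure(handler, requests):
    """Time a batch of simulated requests, then report throughput
    (requests per unit time) and mean response time per request."""
    per_request = []
    start = time.perf_counter()
    for req in requests:
        t0 = time.perf_counter()
        handler(req)                              # process one request
        per_request.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    throughput = len(requests) / elapsed          # requests per second
    mean_response = sum(per_request) / len(per_request)
    return throughput, mean_response

# Hypothetical workload standing in for real page processing.
handler = lambda req: sum(range(1000))
tp, rt = measure(handler, list(range(100)))
print(f"throughput: {tp:.0f} req/s, mean response: {rt * 1000:.4f} ms")
```

Real benchmarks send requests concurrently rather than one at a time, which is why throughput and response time are reported separately: a server can have good response time for a single request but poor throughput under load.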

3.) In one paragraph, outline the main differences between a typical desktop PC and a computer that would be suitable to use as a Web server for a small web site.

Typically, a web server computer will have more memory, a faster processor, and larger hard drives that can access data more quickly than a desktop PC, and it will often have multiple processors.  While some servers are standalone models, most are designed to be stackable or mountable in a standard 19-inch equipment rack.  These higher-capacity computers, with their extra memory and data storage, also tend to be more expensive -- perhaps two to three times the price of a regular desktop model.


E2: Using the W3C link checker:

I tried using the link checker on the front page of the web site for one of my favorite MMORPGs, Champions Online:

http://www.champions-online.com

From this, I learned something very interesting: there is a file called /robots.txt that you can place in the root directory of a web site that will (in most cases) keep web robots that are prowling the Internet from crawling your site!  You can learn more about the file (and get a copy of it, should you wish to use it on your own site) here:

http://www.robotstxt.org/robotstxt.html

It seems like the file would have some uses, but web robots can be coded to ignore it entirely, so it should not be relied on to safeguard sensitive data or anything like that.  The check also located one link with a malformed URL, and another link that used a method the link checker did not support.
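For reference, the standard example given on the robotstxt.org page is only two lines; this version asks every robot that honors the file to stay out of the whole site:

```
User-agent: *
Disallow: /
```

The `User-agent` line says which robots the rule applies to (`*` means all of them), and each `Disallow` line names a path they are asked not to visit.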
