Static content delivery on "edge servers" is something that Akamai, LimeLight, Highwinds, Panther Express, and the like have been selling as the core of their content delivery network (CDN) services. If you read my last post then you're already familiar with the concept of the "edge" and the "origin." For those who didn't, the "edge" is basically where end users access a SaaS-type web application and the "origin" is where the content you want to accelerate lives. Typically, end users, who are considered part of the "last mile," experience greater latency and/or packet loss (resulting in a poor end user experience) when going beyond the "edge." Without an edge delivery service, end users have to go from the "last mile," to the "edge," to the "middle mile" (depending on who you ask and where the end user is), to the "backbone."

Note: Anyone concerned with end user performance should get transit from a major backbone provider in order to deliver content as fast as possible to as many networks as possible. Static edge delivery will still shave time off of the overall transaction, but the performance benefit is greatest when the origin is on a major backbone, or with someone like InterNAP.

Most of the above-mentioned services work by placing a number of servers strategically located in the major edge networks, which cache the static content going out to the end users. If a file is being requested for the first time (from the perspective of the edge server), a request is made back to the origin for the file, and the response is then transparently sent to the end user. From that point on, the accessed edge server will cache a copy of whatever files were requested for a configurable period of time (the cache TTL). A simplified sketch of this caching logic appears at the end of this post.

To avoid hurting the end user experience when a file is requested that the edge doesn't have, the origin request is accelerated through what is usually a proprietary network acceleration algorithm. In a nutshell, the CDN companies rely on path selection smarter and faster than default BGP routing in order to appear seamless. Akamai, for example, utilizes their SureRoute platform as the cornerstone of most of their acceleration services.

A typical end user transaction for static content delivered by edge servers looks like the following:

- the user requests some URL which is accelerated at the edge
- the user is directed to the fastest local edge server, as determined by the CDN provider, either via BGP Anycast or DNS redirection
- if the edge server in question doesn't have the requested content, the edge server makes a request to the origin over an accelerated and/or optimized path
- once the edge server has the requested data, it is cached per policy and served to the end user

It's worth mentioning before we get any further that hostnames and URLs are what is accelerated, not networks, servers, or anything along those lines.

If we assume the user is in New York and the origin is in San Francisco, we can shave a lot of time off the transaction. A request spanning the width of the country takes, on average, something on the order of 200ms round trip: 100ms spent going from NY to SF and another 100ms coming back from SF to NY. While 200ms isn't a big deal for most people, keep in mind this is 200ms for each and every request on the page. So if you host a website with fifty 10k images, end users in NY will have to wait a theoretical maximum of 10 seconds just on network latency. (Since we're assuming static content here, origin server performance is rarely an issue.)
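To put numbers on that, here's a quick back-of-the-envelope sketch. It assumes every request is fully serialized (one round trip per object, nothing fetched in parallel), which is exactly what makes it a theoretical maximum; real browsers open parallel connections, and the function name and values are purely illustrative.

```python
# Back-of-the-envelope math for the NY <-> SF example above.
# Assumes every object costs one full round trip and nothing is
# fetched in parallel, which is what makes this a theoretical maximum.

def serialized_latency_ms(num_objects: int, round_trip_ms: float) -> float:
    """Total network latency if requests are made one after another."""
    return num_objects * round_trip_ms

# 50 objects at ~200ms per cross-country round trip:
print(serialized_latency_ms(50, 200) / 1000, "seconds")  # -> 10.0 seconds
```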
For comparison, routing to the edge of Comcast's residential cable network in SF takes something like 15ms. For the example above, the same 50 objects will take less than a second (~750ms) to download in an edge delivery scenario. Latency is slightly higher for files being requested for the first time, but that should only affect one user, once, with all subsequent requests happening at accelerated rates.

Please do not consider the CDN references above to be an endorsement of any type. I am only trying to relay information; it is up to you to find the best network to suit your needs.
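As promised earlier, here is a minimal sketch of the edge caching logic described in this post. It's an illustration, not any vendor's implementation: the cache is a plain in-memory dictionary, the TTL value is an arbitrary assumption, and the origin fetch is ordinary HTTP rather than a proprietary accelerated path.

```python
import time
import urllib.request

# Toy in-memory cache: URL -> (time fetched, response body).
CACHE: dict[str, tuple[float, bytes]] = {}
CACHE_TTL_SECONDS = 300  # the "configurable period of time" (cache TTL); arbitrary here

def serve_from_edge(url: str) -> bytes:
    """Serve a cached copy if it's still fresh; otherwise go back to the origin."""
    entry = CACHE.get(url)
    if entry is not None:
        fetched_at, body = entry
        if time.time() - fetched_at < CACHE_TTL_SECONDS:
            return body  # cache hit: no trip back to the origin at all
    # First request (or expired TTL): fetch from the origin. A real CDN
    # would make this request over its accelerated/optimized path.
    with urllib.request.urlopen(url) as response:
        body = response.read()
    CACHE[url] = (time.time(), body)  # cache per policy for the next user
    return body
```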