Friday, July 31, 2009

Untitled

You might immediately notice that the code in Listing 3 doesn't look like PHP. That's because most of it isn't. It's standardized output that requires little in the way of dynamic content.

The <feed> element identifies this XML document as an Atom feed. The namespace used to define the elements is provided as an attribute of the <feed> element. You also use the aforementioned xml:lang attribute to specify that this is a document written in English.

The <title> element specifies a title for the overall feed. Likewise, the <subtitle> element specifies a subtitle for the overall feed.

The <link> element specifies the URL of this syndication.php document. The address in the example works only in the fictitious world described in this article; in a real deployment, you would point this link at the URL that produces the feed's output.

The <updated> element produces a timestamp (compliant with the RFC 3339 standard) that tells the consumer of this feed when it was last updated. In this case, because the feed always retrieves the latest data from the database and is therefore always up to date, you use the current timestamp. You may also notice a little snippet of PHP code in this element: a call to a custom-built PHP function that produces a timestamp in RFC 3339 format.
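The date3339() helper itself isn't shown in this excerpt. Here is one minimal way such a function could be written; the exact implementation in the article's download may differ:

```php
<?php
// One possible implementation of the date3339() helper used in the
// listings. PHP's "O" format character yields an offset like "-0500";
// RFC 3339 requires a colon in the offset, i.e. "-05:00".
function date3339($timestamp = 0)
{
    if (!$timestamp) {
        $timestamp = time();
    }
    $date   = date('Y-m-d\TH:i:s', $timestamp);
    $offset = date('O', $timestamp);
    return $date . substr($offset, 0, 3) . ':' . substr($offset, 3);
}
?>
```

Calling date3339() with no argument yields the current time in the same shape seen later in Listing 5 (for example, 2009-05-03T16:19:54-05:00).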

The <author> element defines the author of the overall feed. You'll be using your boss's name as the author because it was his idea.

Finally, the <id> element uniquely identifies the feed in an Internationalized Resource Identifier (IRI) format.
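Listing 3 itself is not reproduced in this excerpt, but working backward from the output shown later in Listing 5, its static preamble looks roughly like the following sketch. The Content-Type header line is an addition of this sketch, not something the article confirms; serving the correct MIME type simply helps browsers and feed readers recognize the document as Atom:

```php
<?php
// Hedged reconstruction of the Listing 3 preamble, inferred from the
// feed output in Listing 5. The <feed> element is intentionally left
// open here; it is closed after the entry loop.
header('Content-Type: application/atom+xml; charset=iso-8859-1');
echo "<?xml version='1.0' encoding='iso-8859-1' ?>";
?>
<feed xml:lang="en-US" xmlns="http://www.w3.org/2005/Atom">
<title>Fishing Reports</title>
<subtitle>The latest reports from fishinhole.com</subtitle>
<link href="http://www.fishinhole.com/reports" rel="self"/>
<updated><?php echo date3339(); ?></updated>
<author>
<name>NameOfYourBoss</name>
<email>nameofyourboss@fishinhole.com</email>
</author>
<id>tag:fishinhole.com,2008:http://www.fishinhole.com/reports</id>
```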

Listing 4 is the main loop that produces each entry in the Atom feed. The vast majority of the work for producing the feed is done here.

Listing 4. The loop

<?php
$i = 0;
while ($row = mysql_fetch_array($result)) {
    if ($i > 0) {
        echo "</entry>";
    }

    $articleDate = $row['posted'];
    $articleDateRfc3339 = date3339(strtotime($articleDate));

    echo "<entry>";
    echo "<title>";
    echo $row['title'];
    echo "</title>";
    echo "<link type='text/html' href='http://www.fishinhole.com/reports/report.php?id=".$row['id']."'/>";
    echo "<id>";
    echo "tag:fishinhole.com,2008:http://www.fishinhole.com/reports/report.php?id=".$row['id'];
    echo "</id>";
    echo "<updated>";
    echo $articleDateRfc3339;
    echo "</updated>";
    echo "<author>";
    echo "<name>";
    echo $row['author'];
    echo "</name>";
    echo "</author>";
    echo "<summary>";
    echo $row['subtitle'];
    echo "</summary>";

    $i++;
}
?>

Once again, Listing 4 covers quite a bit of ground. First is the while loop. Basically, this part of the code says, in English, "as long as there are rows in the table that haven't been included in the output yet, keep going." The current row in each iteration is stored in a PHP variable intuitively called $row.

Then the counter ($i) is checked. If the counter is more than 0, then that means this is at least the second iteration. In that case, it is necessary to close the previous iteration's <entry> element.

The next two lines retrieve the article date (from the POSTED column) and convert it to RFC 3339 format using the aforementioned function.

Next, the <entry> element is started. Following that is the <title> element, which is populated from the TITLE column in the current row.

The <link> element is unusual in that it doesn't contain any child text. Instead, the actual link is referenced as an attribute; this is part of the Atom standard. The link simply points the user to the URL where the entire article can be read. Recall that this feed provides only a synopsis.

The <id> element is similar to the one that was described previously. It uniquely identifies this element in IRI format. And, as before, it is constructed from the relevant URL.

The <updated> element contains the DATETIME value (in RFC 3339 format) from the POSTED column. Recall that the $articleDateRfc3339 variable was populated earlier in this iteration.

Next comes the <author> element. This element, unlike the others (but like the <author> element in the preamble), has child elements. For this article, only one of those children is used: the author's name, which is populated from the AUTHOR column of the current row.

The <summary> element contains the information gleaned from the SUBTITLE column of the current row.

Finally, the loop counter ($i) is incremented, and the loop continues.

That, in a nutshell, is the entire body of code associated with producing an Atom document from the REPORTS table. As you can see, it's not as complicated as it might seem at first.
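One detail worth noting: the loop in Listing 4 closes each <entry> at the top of the next iteration, so when the loop ends, the final entry and the root <feed> element are still open. The article's full download presumably handles this; a minimal sketch of the code that would follow the loop:

```php
<?php
// After the loop in Listing 4: close the final <entry> (if any rows
// were emitted) and the root <feed> element, then release resources.
if ($i > 0) {
    echo "</entry>";
}
echo "</feed>";
mysql_close(); // closes the connection opened earlier in the article
?>
```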

Also, keep in mind that many elements in the Atom specification are not covered here. You can just as easily add those by following the same patterns I describe in this section of the code. For more information, see Resources.

Test it!

Now comes the fun part: testing!

Rather than retype (or copy and paste) everything you see in the code listings above, you can simply use the PHP file that is included in the Download section. Copy that file to a local directory and make the database changes described earlier (user name, password, and host). Then copy it to a location in your Web server's PHP document tree that has access to the database.

When you have the PHP file in the correct place, launch your browser and access your file as follows: http://your host/context/syndication.php.

As with any customized solution, you need to change the values in italics to match your specific environment.

As I stated previously, your results will vary depending upon which browser and version you use. Some of the more modern browsers detect that this is an Atom feed and display the results accordingly. Others display it in raw XML format. Still others might produce nothing because the document is not a standard HTML document.

If the browser does not display the raw XML, you can view it by right-clicking the document and selecting View Source. You should then see something similar to Listing 5.

Listing 5. The output (abbreviated)

<?xml version='1.0' encoding='iso-8859-1' ?>
<feed xml:lang="en-US" xmlns="http://www.w3.org/2005/Atom">
  <title>Fishing Reports</title>
  <subtitle>The latest reports from fishinhole.com</subtitle>
  <link href="http://www.fishinhole.com/reports" rel="self"/>
  <updated>2009-05-03T16:19:54-05:00</updated>
  <author>
    <name>NameOfYourBoss</name>
    <email>nameofyourboss@fishinhole.com</email>
  </author>
  <id>tag:fishinhole.com,2008:http://www.fishinhole.com/reports</id>
  <entry>
    <title>Speckled Trout In Old River</title>
    <link type='text/html' href='http://www.fishinhole.com/reports/report.php?id=4'/>
    <id>tag:fishinhole.com,2008:http://www.fishinhole.com/reports/report.php?id=4</id>
    <updated>2009-05-03T04:59:00-05:00</updated>
    <author>
      <name>ReelHooked</name>
    </author>
    <summary>Limited out by noon</summary>
  </entry>
  ...
</feed>

Another way to test it is to verify that the feed is valid. You can do that using one of the many Atom feed validators you can find in cyberspace. A good one to use is http://www.feedvalidator.org. That Web site validates feeds in Atom, RSS, and Keyhole Markup Language (KML) formats.

Business Results

Because you implemented and deployed your Atom feed, thousands of new, enthusiastic sport fishermen from around the world now have exposure to the fishing reports on your Web site. You are getting hundreds of incoming links from sport fishing sites that embed your Atom feed. Some enthusiastic sport fishermen are even using feed readers to view the reports on a daily basis.

Your boss pops back into your office after looking at the latest traffic reports. He is pleased with the additional visits and reports that unique visitors have increased by 10%. He gives you a thumbs up, slurps his coffee, and walks away.

Conclusion

The Atom specification is an ideal means of syndicating your Web content. Using PHP with MySQL, you can easily produce a Web feed that complies with the Atom standard and is always up to date because it reads directly from the database. The feed can then be read by a feed reader or embedded in other Web sites. The end result is broader exposure for your Web content, and that means more visitors and, most likely, an increase to your bottom line.

Posted via web from swathidharshananaidu's posterous

@MySpace.com Email Address

Earlier this week, it was reported that MySpace was soon to launch its own email system – MySpace Mail – that would allow users to create their own @myspace.com email address and use the social network as a webmail provider.

That time has come: MySpace has just started rolling out the beta for its MySpace Mail program. Access will come in waves, with all users having it within the next few weeks. Here's the big question, though: who's going to switch to MySpace Mail?

Social Email Features

MySpace is hoping that the combination of a MySpace domain and its social networking features will lure some of its millions of users to take the plunge and create a MySpace Mail account. We’ve been given the core feature list, which MySpace describes as follows:

1. New Mail center provides a snapshot of all your mail activities including messages, sent messages, requests, and notifications

2. Send and receive messages from inside or outside the MySpace network

3. Unlimited file storage

4. One click to embed photos directly from your profile or desktop

5. Send and receive file attachments including music and video

6. Search within Mail using our Google Gears implementation

7. Check out friends’ activities in real time via the new Mail Activity Stream module

8. Address book that automatically saves your contacts

Some of these features will interest MySpace users – one-click photo embedding, Google Gears integration, and the activity stream are all good social twists on MySpace email. Still, we ask the big question: who's actually going to use one instead of their current Gmail, Yahoo Mail, Hotmail, or other email account? Let us know in the comments.


We Can Change Change, don't get excited, it's not America, but hope went @Twitter


IBM---> Dive deeper into cloud computing through WebSphere

IBM | 30 July 2009 | Volume 10, Issue 29
developerWorks Weekly Edition
 

Welcome, developers!
 

Prepare your Linux system for the future of disk storage. Dive deeper into cloud computing through WebSphere. Put thousands of UNIX commands at your fingertips with the man reference system. This week, developerWorks shows you how to do all of this and more. Our top features list makes for some outstanding reading:


And if you're looking for a more hands-on learning experience, why not join us for developerWorks Live! briefings? These instructor-led training sessions can get you up to speed on the latest technologies, help you move your software projects forward, and show you how to squeeze the most out of your IT investments. Briefings cover a wide range of topics, and we've got them scheduled in locations worldwide. Check our listings to see if there's one coming to your neighborhood. (If your newsletter profile includes your location information, then you should see events listed for your area in the space below this intro.) 

Can't make it to one of our live events? No worries! Our virtual briefings are the next best thing -- online training sessions that effectively simulate the classroom setting. By combining voice, video, data, and graphics, virtual briefings provide a structured learning environment where it's easy to interact with your instructors. Keep checking the schedule: If you miss an event, you can always catch the replay. (Now there's an option I could have used in college.... 8am is no time for calculus!) 

Until next time,
John Swanson and the developerWorks editorial team 


(P.S. I'll be on vacation next week, so look for the next issue of this newsletter on 13 August.) 

DEVELOPER RESOURCES

Spotlight

Top 10 tutorials and articles on developerWorks 

Webcast: From credit cards to gift registries -- Connect everything with Smart SOA (12 August) 

My developerWorks: Get to know Suma Shastry, QA lead and dW author 

WebSphere eXtreme Scale V7.0 development AMI now available on Amazon EC2 

Follow us: Get developerWorks updates on Twitter 

Join us for developerWorks Live! briefings

Downloads

Trial: Rational Service Tester for SOA Quality

From alphaWorks: CIM Repository Synchronization for Cloud Computing

Download, try, or buy

Additional Resources

IBM privacy policy 

IBM copyright and trademark information

 
 
 Developer events in your area
developerWorks Live! briefing in Boston: Eclipse -- Empowering the universal platform
Dive into some of the most important, feature-rich projects that the Eclipse community is developing. From multi-language support to plug-in development, Eclipse is capable of far more than just Java development. (12 August 2009, Waltham, MA)
Don't miss out -- register today! >
Workshop in Austin: Get started with IBM software on Amazon Web Services featuring WebSphere sMash and DB2
This workshop shows you how to create an Amazon EC2 account, how to configure Amazon machine instances with preloaded IBM middleware, how Amazon EC2 security works, and much more. (25 August 2009, Austin, TX)
Don't miss out -- register today! >
More
 AIX and UNIX
AIX and UNIX zone | AIX and UNIX tutorials | AIX and UNIX articles | AIX and UNIX forums
Speaking UNIX: Man oh man
UNIX has hundreds if not thousands of commands, and it's impossible to remember every option and nuance. Fortunately, you don't have to: man, UNIX's built-in online reference system, is man's best friend. 
Learn about man's best friend >
 alphaWorks
Update: IBM Performance Simulator for Linux on POWER
If you're a Linux on POWER user, this tool offers you a set of performance models for IBM's POWER processors. The latest update adds an AIX version. 
Download it now >
Update: Performance Analysis Tool for Java
Use this tool to automatically detect Java threads that consume unanticipated large amounts of system resources. Version 2.1 supports PHD-CSV 4.0 file format.
Download it now >
More
 Information Management
Information Mgmt zone | Articles | Tutorials | Reader favorites | Forums | Downloads
Integrate heterogeneous metadata
Explore usage scenarios for integrating metadata from IBM Cognos Business Intelligence and IBM InfoSphere Information Server.
Start integrating >
Run Oracle applications on DB2 9.7 for Linux, UNIX, and Windows
Get a high-level overview of what Oracle compatibility means in DB2 for Linux, UNIX, and Windows with new, out-of-the-box support for Oracle's SQL and PL/SQL dialects.
Enable your Oracle apps for DB2 >
Integrate Cognos products with IBM Support Assistant
Resolve your software issues more efficiently by following these step-by-step instructions for integrating IBM Cognos Diagnostic Tools with the IBM Support Assistant.
Get the assist >
IBM Certification Days: Save 50% on professional certification
Demonstrate your expertise to the community. IBM Certification Day events are taking place at various venues around the globe. Participants receive a 50% discount on all Information Management certification exams.
Check out the event schedule and pre-register >
Virtual tech briefing: Optim Development Studio 101 (20 August)
Confused about Data Studio, Data Studio Developer, and now Optim Development Studio? Get a tour of Optim Development Studio from a product expert and learn how Optim Development Studio extends the capabilities in Rational Application Developer to turbo-charge the development and optimization of data persistence layers. (20 August 2009)
Book your calendar for this free tech briefing >
2009 Customer Innovation Awards: Nomination deadline extended to 7 August
Have you submitted your nomination of an innovative Information Management solution? There's still time to check out the exciting new categories added this year, and nominate by the extended deadline of 7 August.
Submit your entry today >
Plan now to attend IBM Information On Demand 2009 Global Conference
Choose sessions across three dynamic programs: Technical Skill Building, Business Leadership, and Business Partner Development. Don't miss the key global conference for Information Management professionals. (25 - 29 October 2009, Las Vegas, NV, USA)
Register now for the best savings >
Virtual conference: Effective data management for smarter outcomes (19 August)
Hear Gartner VP Donald Feinberg talk about the IT challenges, key trends, and innovations shaping the data management marketplace; listen to a moderated panel of customers and partners who have incorporated innovative technologies; ask questions during the live Q&A session; and much more. (19 August 2009) 
Register now >
Enhance your DB2 skills at IDUG Europe
Take your career and organization to the next level. Join hundreds of your colleagues at IDUG 2009 - Europe, coming to Rome, Italy, 5 - 9 October 2009. This IDUG event is the premier conference dedicated to providing technical education and networking specifically for IBM DB2 professionals.
Register early and save >
 Java technology
Java technology zone | New to Java programming | Forums | Standards | Downloads | Tutorials
Evolutionary architecture and emergent design: Language, expressiveness, and design
In this first of a two-part article, Neal Ford discusses the intersection of expressiveness and patterns, demonstrating these concepts with both idiomatic patterns and formal design patterns.
Better design >
Transaction strategies: The High Performance strategy
Mark Richards wraps up his series with a strategy for high-performance applications. Your application can maintain fast processing times while still supporting some degree of data integrity and consistency -- but you need to be aware of the trade-offs involved. 
Get to know the trade-offs >
 Linux
Linux zone | Articles | Tutorials | Forums | Reader favorites | LPI exam prep
Make the most of large drives with GPT and Linux
With 2TB disks now readily available and larger drives right around the corner, MBR doesn't cut it anymore. It's time for forward-looking Linux users to get familiar with the GUID Partition Table standard. 
When a lot still isn't enough >
Linux tip: Create a pixel ruler from the command line
Manipulating graphics through shell commands and scripts might seem a little odd, but it's a useful skill for handling repetitive tasks and large batch jobs. Get started by using Bash scripting, shell arithmetic, and ImageMagick to create a pixel ruler graphic. 
Graphics for admins >
 Lotus
Lotus zone | New to Lotus | Articles | Tutorials | Downloads | Forums
Lotus Domino 8.5 server performance, Part 3: Enterprise cluster mail performance
Lotus Domino 8.5 offers features aimed at reducing the total cost of ownership of the Lotus Domino mail server cluster infrastructure in a large enterprise. In this article, see how you can leverage these features incrementally to realize TCO benefits while upgrading to Lotus Domino 8.5. 
Reduce your costs >
Trial: IBM Mashup Center
Download a complimentary trial of IBM Mashup Center software, which provides an easy-to-use business mashup solution, supporting quick assembly of dynamic situational applications.
Download now >
Now available: Lotus Notes widgets for LinkedIn, TripIt
Using these new widgets for Lotus Notes, you can open LinkedIn and TripIt applications via single sign-on right from your familiar Lotus Notes desktop screen. Simplify your professional networking and travel management tasks.
Boost your productivity >
 Rational


Load Balancers Are Dead: Time to Focus on Application Delivery

 

When looking at feature requirements in front of and between server tiers, too many organizations think only about load balancing. However, the era of load balancing is long past, and organizations will be better served to focus their attention on improving the delivery of applications.

Overview

This research shifts the attention from basic load-balancing features to application delivery features to aid in the deployment and delivery of applications. Networking organizations are missing significant opportunities to increase application performance and user experience by ignoring this fundamental market shift.

Key Findings
  • Enterprises are still focused on load balancing.
  • There is little cooperation between networking and application teams on a holistic approach for application deployment.
  • Properly deployed application delivery controllers can improve application performance and security, increase the efficiency of data center infrastructure, and assist the deployment of the virtualized data center.
Recommendations
  • Network architects must shift attention and resources away from Layer 3 packet delivery networks and basic load balancing to application delivery networks.
  • Enterprises must start building specialized expertise around application delivery.

What You Need to Know
IT organizations that shift to application delivery will improve internal application performance that will noticeably improve business processes and productivity for key applications. For external-facing applications, end-user experience and satisfaction will improve, positively affecting the ease of doing business with supply chain partners and customers. Despite application delivery technologies being well proved, they have not yet reached a level of deployment that reflects their value to the enterprise, and too many clients do not have the right business and technology requirements on their radar.

Analysis
What's the Issue?

Many organizations are missing out on big opportunities to improve the performance of internal processes and external service interactions by not understanding application delivery technologies. This is very obvious when considering the types of client inquiries we receive on a regular basis.

In the majority of cases, clients phrase their questions to ask specifically about load balancing. In some cases, they are replacing aged server load balancers (SLBs), purchased before the advent of the advanced features now available in leading application delivery controllers (ADCs).

In other cases, we get calls about application performance challenges and, after exploring the current infrastructure, find that these clients have modern, advanced ADCs already installed but haven't turned on any of the advanced features; in effect, they are using new equipment as if it were circa-1998 SLBs. In both cases, there is a striking lack of understanding of what ADCs can and should bring to the enterprise infrastructure.

Organizations that still think of this critically important position in the data center as one that only requires load balancing are missing out on years of valuable innovation. They are not taking advantage of the growing list of services available to increase application performance and security, or playing an active role in the increasing virtualization and automation of server resources. Modern ADCs are the only devices in the data center capable of providing a real-time, pan-application view of application data flows and resource requirements. This insight will continue to drive innovation of new capabilities for distributed and virtualized applications.

Why Did This Happen?

The "blame" for this misunderstanding can be distributed in many ways, though it is largely history that is at fault. SLBs were created to better solve the networking problem of how to distribute requests across a group of servers responsible for delivering a specific Web application. Initially, this was done with simple round-robin DNS, but because of the limitations of this approach, function-specific load-balancing appliances appeared on the market to examine inbound application requests and to map these requests dynamically to available servers.

Because this was a networking function, the responsibility landed solely in network operations and, while there were always smaller innovative players, the bulk of the early market ended up in the hands of networking vendors (largely Cisco, Nortel and Foundry [now part of Brocade]). So, a decade ago, the situation basically consisted of networking vendors selling network solutions to network staff. However, innovation continued, and the ADC market became one of the most innovative areas of enterprise networking over the past decade.

Initially, this innovation focused on the inbound problem — such as the dynamic recognition of server load or failure and session persistence to ensure that online "shopping carts" weren't lost. Soon, the market started to evolve to look at other problems, such as application and server efficiency. The best example would be the adoption of SSL termination and offload.

Finally, the attention turned to outbound traffic, and a series of techniques and features started appearing in the market to improve the performance of the applications being delivered across the network. Innovations migrated from a pure networking focus to infrastructure efficiencies to application performance optimization and security -- from a networking product to one that touched networking, server, applications and security staff. The networking vendors that were big players when SLB was the focus quickly became laggards in this newly emerging ADC market.

Current Obstacles

As the market shifts toward modern ADCs, some of the blame must rest on the shoulders of the new leaders (vendors such as F5 and Citrix NetScaler). While their products have many advanced capabilities, these vendors often undersell their products and don't do enough to clearly demonstrate their leadership and vision to sway more of the market to adopting the new features.

The other challenge for vendors (and users) is that modern ADCs impact many parts of the IT organization. Finally, some blame rests with the IT organization. By maintaining siloed operational functions, it has been difficult to recognize and define requirements that fall between functional areas.

Why Do We Need More, and Why Should Enterprises Care?

Not all new technologies deserve consideration for mainstream deployment. However, in this case, advanced ADCs provide capabilities to help mitigate the challenges of deploying and delivering the complex application environments of today. The past decade saw a mass migration to browser-based enterprise applications targeting business processes and user productivity as well as increasing adoption of service-oriented architectures (SOAs), Web 2.0 and now cloud computing models.

These approaches tend to place increased demand on the infrastructure, because of "chatty" and complex protocols. Without providing features to mitigate latency, to reduce round trips and bandwidth, and to strengthen security, these approaches almost always lead to disappointing performance for enterprise and external users. The modern ADC provides a range of features (see Note 1) to deal with these complex environments. Beyond application performance and security, application delivery controllers can reduce the number of required servers, provide real-time control mechanisms to assist in data center virtualization, and reduce data center power and cooling requirements.

ADCs also provide simplified deployment and extensibility and are now being deployed between the Web server tier and the application or services tier (for SOA) servers. Most ADCs incorporate rule-based extensibility that enables customization of the behavior of the ADC. For example, a rule might enable the ADC to examine the response portion of an e-commerce transaction to strip off all but the last four digits of credit card numbers. Organizations can use these capabilities as a simple, quick alternative to modifying Web applications.
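Rule languages are vendor-specific, so an actual ADC rule would look different on each platform. As a language-neutral sketch (written here in PHP, with an invented function name), the transformation such a credit-card-masking rule applies to a response body is essentially this:

```php
<?php
// Illustrative only: mask all but the last four digits of 16-digit
// card numbers in a response body, as an ADC rewrite rule might.
function mask_card_numbers($body)
{
    return preg_replace(
        '/\b(?:\d{4}[ -]?){3}(\d{4})\b/',
        'XXXX-XXXX-XXXX-$1',
        $body
    );
}

echo mask_card_numbers('Card on file: 4111 1111 1111 1234');
// Prints: Card on file: XXXX-XXXX-XXXX-1234
?>
```

The point of doing this at the ADC rather than in the application is that no Web application code needs to change.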

Most ADCs incorporate a programmatic interface (open APIs) that allows them to be controlled by external systems, including application servers, data center management, and provisioning applications and network/system management applications. This capability may be used for regular periodic reconfigurations (end-of-month closing) or may even be driven by external events (taking an instance of an application offline for maintenance). In some cases, the application programming interfaces link the ADC to server virtualization systems and data center provisioning frameworks in order to deliver the promise of real-time infrastructure.

What Vendors Provide ADC Solutions Today?

During the past five years, the innovations have largely segmented the market into vendors that understand complex application environments and the subtleties in implementations (examples would be F5, Citrix NetScaler and Radware) and those with more of a focus on static feature sets and networking. "Magic Quadrant for Application Delivery Controllers" provides a more complete analysis and view of the vendors in the market.

Vendors that have more-attractive offerings will have most or all of these attributes:

  • A strong set of advanced platform capabilities
  • Customizable, extensible platforms and solutions
  • A vision focused on application delivery networking
  • Affinity to applications:
    • Needs to be application-fluent (that is, they need to "speak the language")
    • Support organizations need to "talk applications"

What Should Enterprises Do About This?

Enterprises must start to move beyond refreshing their load-balancing footprint. The features of advanced ADCs are so compelling for those that make an effort to shift their thinking and organizational boundaries that continuing efforts on SLBs is wasting time and resources. In most cases, the incremental investment in advanced ADC platforms is easily compensated by reduced requirements for servers and bandwidth and the clear improvements in end-user experience and productivity.

In addition, enterprises should:

  • Use the approach documented in "Five Dimensions of Network Design to Improve Performance and Save Money" to understand user demographics and productivity tools and applications. Also, part of this requirements phase should entail gaining an understanding of any shifts in application architectures and strategies. This approach provides the networking team with much greater insight into broader IT requirements.
  • Understand what they already have in their installed base. We find, in at least 25% of our interactions, enterprises have already purchased and installed an advanced ADC platform, but are not using it to its potential. In some cases, they already have the software installed, so two to three days of training and some internal discussions can lead to massive improvements.
  • Start building application delivery expertise (see "Toolkits: Your Next Key Hires Should Be Application Delivery Architects and Engineers"). This skill set will be one that bridges the gaps between networking, applications, security and possibly the server. Organizations can use this function to help extend the career path and interest for high-performance individuals from groups like application performance monitoring or networking operations. Networking staff aspiring to this role must have strong application and personal communication skills to achieve the correct balance. Some organizations will find they have the genesis of these skills scattered across multiple groups. Building a cohesive home will provide immediate benefits, because the organization's barriers will be quickly eliminated.
  • Start thinking about ADCs as strategic platforms, and move beyond tactical deployments of SLBs. Once organizations think about application delivery as a basic infrastructure asset, the use of these tools and services (and associated benefits) will be more readily achieved.
Advanced ADC Features

We have defined a category of advanced ADCs to distinguish their capabilities from basic, more-static function load balancers. These advanced ADCs operate on a per-transaction basis and achieve application fluency. These devices become actively involved in the delivery of the application and provide sophisticated capabilities, including:

  • Application layer proxy, which is often bidirectional
  • Content transformation
  • Selective compression
  • Selective caching of dynamic content
  • HTML or other application protocol optimizations
  • Web application firewall
  • XML validation and transformation
  • Rules and programmatic interfaces

 



Thursday, July 30, 2009

The launch years of today’s most popular websites

How long have today’s most popular websites been around? This is a survey of when today’s top 50 websites began their lives.

What we here at Pingdom wanted to discover when we made this survey was not just how old the most popular sites are, but to see if we could discover any interesting trends based on that, and we think we did.

For the extra curious we’ve also included a table with the individual launch years for all of the top websites at the bottom of the article.

A note about site inclusion/exclusion: We based this chart on the Alexa top 50 sites in the US. We should note here that we filtered out a few sites from the top 50 because we considered them sites that people don’t normally visit. Some ad networks (like doubleclick.com) always end up in artificially high positions due to the way Alexa measures, for example. We tried to focus on websites that people actually use. After the filtering, we ended up with 42 sites (the list is available at the bottom of this article).

A few observations

Although the above chart pretty much speaks for itself, especially with the red trend curve, here are a few observations based on the data we collected.

  • 43% of today’s top sites were started in 1996 or earlier.
  • The three “biggest” launch years, from largest to smallest: 1996, 1995, 2005.
  • The sites launched in 1995, 1996 and 2005 together account for almost 48% of the top sites.
  • Fun fact: The oldest site in the current top 50 is IMDB.com, which launched on the Web in 1992. The youngest is Bing.com, launched this year.

Calculations were based on the filtered number of sites, i.e. 42. (See explanation under the chart for how we came to that number.)

Some thoughts and things to consider

The Web is still young (a teenager in human years), so it’s difficult to draw any long-ranging conclusions from the gathered data, but we can at least make some reasonable assumptions (and pose a few questions) based on it.

  • The peak at 1995-1996 is when the Web really started to take off, so understandably a lot of big properties launched websites back then (including traditional media like the New York Times and CNN).
  • The slump around 2000-2001 is also understandable. That’s when the dot-com bubble burst.
  • Question: Was the time around 2005 an unusually creative and productive (and successful) era on the Web, or is it a matter of the cyclic rise and fall in popularity of websites? Will we in two years’ time see a peak around 2007 instead of 2005 if we perform the same survey, i.e. do most websites “peak” after around four years?
