Website Fiascos & Lessons Learned - A Quick Recap

Recent website fiascos led us to pen a series of blog posts on the subject. Here are the four parts: Part 1, Part 2, Part 3 and Part 4. In case you missed them, read on for a quick recap.

 

Surfing the www

 

Here is the story so far:

Websites

After a couple of website glitches last September, including the Obamacare website crash, monitoring your website from a user's perspective has become mandatory. Here are some quick pointers to proactively monitor your website's performance:

  • Go one step beyond your QA team's efforts and test realistic monitoring scenarios

  • Keep an eye on the response time and content accuracy of your webpages (see the sketch after this list)

  • Identify and fix geographical discrepancies in response times
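
For readers who want to try this outside a monitoring service, here is a minimal Python sketch that times a page load and checks that the expected content is present. The URL, the expected text and the two-second threshold are illustrative assumptions, not values from the original posts.

    # Minimal sketch: check a page's response time and content from a user's perspective.
    # The URL, expected text and threshold below are placeholder assumptions.
    import time
    import requests

    URL = "https://www.example.com/"      # hypothetical page to monitor
    EXPECTED_TEXT = "Welcome"             # text a healthy page should contain
    THRESHOLD_SECONDS = 2.0               # assumed response time target

    start = time.monotonic()
    response = requests.get(URL, timeout=10)
    elapsed = time.monotonic() - start

    if response.status_code != 200:
        print(f"ALERT: unexpected status {response.status_code}")
    elif EXPECTED_TEXT not in response.text:
        print("ALERT: expected content missing, the page may be serving an error")
    elif elapsed > THRESHOLD_SECONDS:
        print(f"WARNING: slow response ({elapsed:.2f}s, target {THRESHOLD_SECONDS}s)")
    else:
        print(f"OK: page loaded in {elapsed:.2f}s with expected content")

Running a check like this from several geographic locations is also the simplest way to spot the regional discrepancies mentioned above.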

Beyond websites 

To prevent server glitches like those the government websites suffered, you should continuously monitor your datacenter infrastructure.

  • Keep watch over your mail servers, databases, VMware resources and other critical components

  • Keep track of your monitoring metrics and your resource availability

  • Define your alerting policies well (a simple threshold check is sketched below)
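
To make the last point concrete, below is a minimal, hypothetical threshold-based alerting policy written in Python. The metric names, limits and notify() placeholder are assumptions for illustration, not Site24x7 configuration.

    # Minimal sketch of a threshold-based alerting policy for datacenter resources.
    # Metric names, limits and the notify() stub are illustrative assumptions.
    THRESHOLDS = {
        "cpu_percent": 85,         # alert when CPU usage goes above 85%
        "disk_used_percent": 90,   # alert when any disk is more than 90% full
        "mail_queue_length": 500,  # alert when the mail server queue backs up
    }

    def notify(metric, value, limit):
        # Placeholder for an email, SMS or webhook notification.
        print(f"ALERT: {metric}={value} breached threshold {limit}")

    def evaluate(metrics):
        # Compare collected metrics against the policy and raise alerts.
        for metric, limit in THRESHOLDS.items():
            value = metrics.get(metric)
            if value is not None and value > limit:
                notify(metric, value, limit)

    # Example run with metrics collected from one monitored host.
    evaluate({"cpu_percent": 92, "disk_used_percent": 40, "mail_queue_length": 120})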

Apart from monitoring the performance of your websites and datacenters, you should enable deep, code-level visibility into your applications. Site24x7 lets you easily visualize end-to-end transactions, from URL to SQL query. The custom dashboard view and automatic reporting system keep you posted about issues in your code.
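
As a rough illustration of what URL-to-SQL visibility means, here is a small, framework-agnostic Python sketch that times one request handler and the query it runs. The handler, the query and the printed trace are hypothetical and do not represent how the Site24x7 agent works.

    # Minimal sketch of tracing a transaction from a URL down to its SQL query.
    # The handler, query and in-memory database are hypothetical illustrations.
    import sqlite3
    import time

    def handle_request(url, conn):
        # Serve one request and record how long the request and its query took.
        request_start = time.monotonic()
        query = "SELECT COUNT(*) FROM orders"   # hypothetical query behind this URL
        query_start = time.monotonic()
        conn.execute(query).fetchone()
        query_ms = (time.monotonic() - query_start) * 1000
        total_ms = (time.monotonic() - request_start) * 1000
        # A real APM agent would send this trace to a dashboard instead of printing it.
        print(f"{url}: total={total_ms:.1f}ms, sql={query_ms:.1f}ms ({query})")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER)")
    handle_request("/checkout", conn)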

You may also need to monitor your physical, virtual and cloud infrastructure from a single console. Sign Up for a FREE Trial with Site24x7 and monitor your servers, applications, and public and private clouds, all in one go!
