Host Benchmarker Blog

Experiences benchmarking top shared hosts

Why does EIG host their servers in Provo, Utah?

If you’ve been a customer of HostGator, A Small Orange, Arvixe, or any of the other hosts acquired by EIG (Endurance International Group), then you’ve probably heard of Provo, Utah before. It’s a small city of about 116k people, and it houses almost all of EIG’s data centers. Your site was most likely migrated there as EIG consolidated servers after acquiring companies, and although they claimed you were being ‘upgraded’, you most likely didn’t notice any performance increase (or perhaps your website got slower or less reliable).

Why Provo, Utah? Short answer: cost. Most good hosting companies locate their data centers near major internet backbones, the way Amazon does in Ashburn, VA. But EIG doesn’t seem to care about performance, as Provo is not a major internet hub.

Here’s why Provo:

  1. Cheap rent. It’s Utah; land costs less than in a larger city such as Los Angeles or New York.
  2. Cool outside air. Did you know that up to 30% of a data center’s cost is spent on air conditioning? Those servers get warm, and they need to stay cool to operate effectively. By using the free outside air of Utah, EIG cuts down on its utility costs.
  3. It’s central – okay, I can understand wanting to be in the middle of the US, but a city like Dallas that sits on a major internet backbone would be a better choice.

Next time you choose a new host, be sure to ask them where their data centers are located. Good hosts will be in major internet backbone cities. Even better hosts will have multiple locations and allow you to choose.

Why does Amazon host many of their EC2 instances in Ashburn, Virginia?

Ever heard of Ashburn, VA before? I hadn’t, until I started using some of Amazon’s Elastic Compute Cloud (EC2) servers. Why would they pick a little town called Ashburn to house their east coast data center?

Turns out there’s a major internet backbone there. A lot of hosting companies, ISPs, etc. strategically place their infrastructure near these international hubs in order to shorten the ‘last mile’. The last mile is the distance between a computer/server and a major network hub, and it usually accounts for most of the network latency. Shortening the last mile is a common practice that has a huge impact on the speed of a connection.
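If you’re curious how many hops (and how much latency) sit between you and your host, a quick traceroute gives a rough picture. This is just a sketch – example.com is a placeholder for your own site’s hostname:

# rough check of the path (and per-hop latency) between you and your host
traceroute example.com      # on Windows, use: tracert example.com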

AshburnVA

There are also major international hubs in Los Angeles, Dallas, New York, and Seattle, to name a few. When picking a host, be sure to ask them where their data center is located, and strive for one that’s near a major internet backbone.

Getting to know InMotion Hosting

A few weeks ago I got an email from a manager at InMotion Hosting, reaching out to say hi and answer any questions I had about their service. As the man behind HostBenchmarker, I’m always looking for ways to network with hosts and learn more about their servers. Once they learned I was less than an hour from their Los Angeles data center, they extended an offer to meet in person and do an interview for this site – pretty cool of them!

The manager put me in touch with Will Miles, a Data Center Operations and Systems Supervisor who has been with InMotion for over 4 years. We met up for an informal dinner (sandwiches at Panera Bread) during the week and, in short, geeked out. I picked his brain on what makes their servers faster and more reliable than the others, and he shared some cool technologies they were using (or about to release).

Here are a few takeaways from that conversation…

InMotionWill

What is it like working for InMotion?

Something Will was clearly proud of was working for InMotion. He’s aware of plenty of unhappy or incompetent workers at other hosting companies, and he’s genuinely appreciative to be at InMotion.

Unlike other hosting companies, InMotion is an employee-owned business. The staff receives direct bonuses and profit sharing, which gives them an incentive to go above and beyond to make their customers happy.

Will said the upper management is very down to earth – they don’t drive fancy cars or act entitled. Instead, they reinvest a lot back into the company, constantly upgrading hardware and so on. That helps ensure the company will be around for many years to come.

Of course he wasn’t going to say anything bad about InMotion while he was wearing a company-issued shirt, but I could tell he truly was proud of where he worked, and enjoyed his job and roles.

How do you ensure your servers are reliable?

After getting to know him and InMotion, I began to ask the questions I was most interested in – the ones that affect the performance of their servers.

At the time of this writing, InMotion came in 4th out of 10 in the uptime stats I collect. While that sounds decent, an uptime of 99.95% still means about 21 minutes of downtime per month. Some would argue that’s fine for shared servers, but I’d like to see 99.99%.
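For anyone who wants to check that math, here’s the quick back-of-the-envelope calculation (assuming a 30-day month):

# downtime allowed by 99.95% uptime over a 30-day month
awk 'BEGIN { printf "%.1f minutes\n", (1 - 0.9995) * 30 * 24 * 60 }'
# prints: 21.6 minutes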

InMotionUptime

While Will agreed that 99.95% could be better, and they’re striving for that, here’s what they’re currently doing to ensure reliable hosting:

  • DDoS mitigation at the ISP level – This means filtering out the bad traffic before it even hits their network/servers. This combats 99.99% of DDoS attacks, but there are always new threats that slip through.
  • Proactive vulnerability mitigation – It amazed me when Will told me that most of the downtime they encounter is due to software glitches. Because of this, they have a team actively contributing back to the server community through patches, upgrades, etc. for common software. For example, he said his team had a fix for the Heartbleed vulnerability before a public patch was released (and that public patch incorporated some of their own development). Thanks for fixing the internet, InMotion!
  • 100% SLA coming soon  – One of InMotion’s future goals is to offer a 100% uptime service level agreement. This means if your site is down due to their servers failing, you’ll be reimbursed a portion of your hosting fees. While they don’t have a projected date for this, it is a core initiative they are focusing on every day.

Will also explained that even if everything is working perfectly with their servers and internal network, there are outside network outages that happen all the time that simply cannot be avoided. He pointed me to InternetHealthReport as an example – each row/column is a major carrier connecting to another major carrier, and anything in yellow or red is a problem:

InternetHealthReport

How do you ensure your servers are fast?

Having reliable servers is one thing, but making them fast takes a whole other level of complexity. At the time of this post, InMotion came in tied for 5th out of 10 for server speed.

InMotionSpeed

Here’s what Will and his team are doing to ensure your website loads fast:

  • Constant rebalancing of resources – They don’t put a fixed number of accounts on each server; instead they constantly monitor the resource usage of accounts and seamlessly move them to keep the load spread evenly. If a few accounts on one server are in high demand, their system automatically moves them to a server under less demand. One server may host 10 high-traffic accounts, while another has 200 low-traffic accounts. What amazes me is that this all happens transparently behind the scenes – no one is manually controlling it, and it’s completely seamless: no IP or DNS changes, no downtime.
  • Capping certain resources – While InMotion does offer unlimited disk space (they host a church website that uses terabytes to stream its video sermons, all on a basic plan), they do cap other resources like bandwidth. I’m a huge fan of this, as there’s no such thing as truly unlimited resources, and I get concerned when I see hosts advertise otherwise.
  • All SSD storage – InMotion recently (about 1.5 years ago) upgraded their entire infrastructure to SSDs for all storage. Compared to traditional spinning hard drives, solid state drives are up to 50x faster. And since there are no moving parts, they break down less often and are more reliable. Nothing’s perfect, though, so everything is protected by RAID by default in case any disks do fail.

What would you like us to know about InMotion?

One thing Will was super excited to tell me about was their new High Availability feature for all of the VPS accounts. It’s a free service that will be provided to all new customers (existing customers just need to ask for it to be enabled) and ensures your servers are almost always reachable. Without getting too technical, it automatically mirrors your VPS sites across many servers. And when I say mirror, I mean everything – not just your files, but all settings, IP addresses, databases, even memory is identical across all replicated instances of your site. It sounds super impressive!

But they don’t stop there – each of these servers has four 10-gigabit network connections, ensuring there’s never a shortage of bandwidth. Snapshots are taken at regular intervals, and you can roll back to a snapshot in real time. All of this combined ensures there’s no more than 1-2 seconds of downtime if their systems detect any kind of hardware failure, network stoppage, or software glitch.

I know the focus of HostBenchmarker is shared hosting, but as an advocate for speed and reliability for all websites, I couldn’t not mention this VPS feature.

This High Availability feature is currently on a soft release, planned to be rolled out to the general public in April 2016.

Wrapping up

Overall, it was a great conversation. I expected it to last no more than 30 minutes (how long can you talk about servers to someone?), but after an hour flew by I realized I better get home to my family.

As we wrapped up, Will mentioned that InMotion does offer facility tours. If you’re ever in the downtown LA area and would like to geek out at the hardware in their data centers, feel free to reach out to them. I’ll probably be doing this sometime soon :)

Thanks again to Will for the interview, Sev for connecting us, and InMotion for being transparent about all of this.

Results from HostBenchmarker are always being updated, so be sure to check out the most current report of InMotion’s benchmark stats.

BTW – if any hosts would like to connect with me to do an interview, please reach out – just keep in mind I can’t be bribed to change benchmark stats.

Bluehost breaks record for worst tech support

One of the tests I run at HostBenchmarker is the Support Response Test. It’s a simple test where I send a tech support message and see how long it takes for them to respond. The top hosts respond within minutes, but the poor performers can take 1-2 days. Well, this month Bluehost broke the record for slowest support response time.

How slow were they to respond? 264 hours. Yup, do the math – that’s a full 11 days!

What if my site had been having real problems? It could have been down for what feels like forever.

What’s interesting is that Bluehost actually varies the most out of any host. Take a look at their history – some months 0.1 hours (awesome), yet many others far worse:

bluehost_support_poor

Note – in the chart above, the results are capped at 168 hours (7 days), simply because I can’t wait around forever to publish test results if a host hasn’t responded. In reality, they responded on Mar 21st, 8:39pm.

I should note that they did apologize in their response, saying that they had recently received a high volume of tickets and were working as fast as they could to resolve them. So there’s that. But still. Eleven days. Wow.

If you’re on Bluehost shared web hosting, you’d better pray you don’t need any help anytime soon. Take a minute to consider some better alternatives.

Top EIG Hosting Companies Tested – Performance Comparison

Endurance International Group, more widely known as ‘EIG’, is a parent company to many well-known web hosts. Lots of the popular hosting brands you’ve heard of, such as HostGator and Bluehost, are owned by EIG.

Many EIG customers are unhappy with their hosting experience. This is especially true when a hosting company gets bought out by EIG and customers have to transition to EIG’s data centers. They claim you’re moving to a more secure, stable environment, but many people notice poorer support and longer load times once the transition happens.

Is it true that all EIG hosting companies suck? Let’s take a look at the data.

Here’s a chart of the top 10 hosts being tested here at HostBenchmarker. I’ve marked the EIG-owned hosts for reference:

EIG-hosting-performance

So, do they all have poor uptime, slow page load times, and bad support? You can see that’s actually not the case.

Consider Bluehost: they actually have the best average page speeds, at under 3 seconds. HostGator has decent uptime at 99.97%. And Site5 has one of the best support response times, at under 0.3 hours.

Note, however, that none of these hosts score well in every category – and it’s important to have a host with good reliability, fast speeds, and great customer service.

See more up-to-date performance benchmarks

HostGator’s New Prices – Rates Raised by 20%!

hostgator-new-prices

I recently got the following email from HostGator regarding one of the shared hosting accounts I run tests with:

Hello,

This email is to notify you that the following invoice with HostGator.com has been generated which reflects the new recurring price for the products you have purchased.

In an effort to ensure best in class service and top-notch server performance, this invoice reflects the new increased recurring price for this product:

Invoice ID: 46373669      Amount: $10.95
Product: Shared Hosting – Hatchling

Whoa! Almost $11/month for HostGator’s cheapest shared plan? That’s about as expensive as Site5 and DreamHost – which are among the most expensive of the cheap shared hosts out there.

When I signed up with Hostgator almost 2 years ago, the price was $8.95, as seen in my detailed review of hostgator’s smallest plan.

However, if you go to their website, it will look like it’s much cheaper than this:

hostgator-signup-price

This is because they’re showing you the 3-year term here, with a one-time 20% discount. Keep in mind the discount goes away on renewal, and if you don’t want to commit to 3 years, you’ll be paying much more than that, as seen in their billing table:

hostgator-new-prices

Does HostGator’s performance warrant an increase? After all, they claim this increase is due to ‘top-notch server performance’ changes. What do you think after looking at these HostGator performance stats?

 

Bluehost performance after server upgrade

A month ago I got a notice from Bluehost telling me they were going to upgrade the server that one of my test accounts is on. Naturally I got excited, since this was a great opportunity to see the before-and-after effect of a server upgrade.

Here’s the email:

Bluehost is dedicated to keeping your account secure and working smoothly. Our commitment to open source software and an intuitive web experience makes your online success our number one priority.

As part of this, your website is being upgraded to a new server; a server that is both faster and more reliable.

Migration of your website to the new server will begin next week. The process requires the server to be rebooted twice; once during the migration and again 48-72 hours after the migration is complete. Please expect 15-20 minutes of downtime between 10 p.m. and 4 a.m. Mountain Time while the server reboots.

We want to assure you that we are doing everything possible to make this a smooth transition.

Bluehost Support

I received that email on Sept 16, 2015, and waited ample time for the upgrade to take place. Looking at the uptime stats, you can tell the upgrade happened around the 24th, based on the downtime:

bluehost-uptime-near-upgrade

And how does page speed measure up after the ‘upgrade’?

bluehost-speed-near-upgrade

Do you see a difference? I don’t. It looks like performance degraded (slower page times) around the beginning of September, before the upgrade happened. Perhaps the upgrade was an attempt to fix this? Doesn’t seem to have helped.

I’m sadly disappointed in Bluehost‘s ability to upgrade servers for better performance. Check out more of their stats for the details.

Which EC2 instance type should be used for WebPageTest Agents?

Every time I set up a private installation of WebPageTest, I struggle to figure out what size instances to use for the WPT agents. Make them too small and you risk inconsistent or inaccurate data. Make them too big and you’re wasting resources (and paying more than you need to). I decided to empirically run some tests comparing different Amazon instance sizes so I’d know for sure which is the most consistent while still being cost-effective.

Skip down to the conclusion if you just want my recommendation and don’t care about the details of how I arrived at it.

Patrick Meenan (who runs WebPageTest.org) states that at least medium instance types should be used (“Medium instances are highly recommended for more consistent test results”). In the past I had experimented with t2.micros and they seemed to do okay, so I decided to test a range of instance types, starting with the t2 tier.

Since WebPageTest is supposed to mimic what real users experience, a reasonable hunch would be to pick an instance with a configuration similar to the average desktop computer. These days most computers come standard with dual cores and at least 6 GB of memory, so I predicted that would be ideal. However, the testing task only requires running a common browser, which shouldn’t use that many resources. I can see that my current browser window is using less than 100MB of memory, so would the additional GBs of memory even be utilized?

Trying t2.micro, t2.small, t2.medium

I tested by spinning up each instance size one at a time, sending it 4 URLs to test, and repeating every 5 minutes until I had a couple hours of data. The sites I tested were a personal site of mine, Google, Yahoo, and Amazon. I tracked the load time (“document complete”) for each run and looked for inconsistencies.
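If you want to reproduce that kind of loop against your own private instance, here’s a minimal sketch using WebPageTest’s runtest.php endpoint. The server hostname and URL list are placeholders, and depending on your setup you may also need to pass an API key or location parameter:

WPT_SERVER="http://wpt.example.com"      # placeholder for your private WPT server
URLS="http://www.example.com http://www.google.com http://www.yahoo.com http://www.amazon.com"
while true; do
  for u in $URLS; do
    # submit a test; f=json makes runtest.php return the test ID as JSON
    curl -s -G "$WPT_SERVER/runtest.php" --data-urlencode "url=$u" -d "f=json" > /dev/null
  done
  sleep 300                              # wait 5 minutes before the next round
done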

Here are the specs of the EC2 instance sizes I used:
ec2-instance-types-t2

A few notes about my testing process:

  • I was using a Chrome/Firefox/IE11 agent image: wpt-ie11-20140912 (ami-561cb13e), deployed from N. Virginia
  • After the agent spun up, I discarded the first 2 test results, just in case there was any overhead in getting initialized or warmed up.
  • Tests were run with Cable-simulated connectivity and the latest version of Chrome (39 at the time).
  • I found previous comments about instance performance being affected by video capture, so video was enabled for all tests.

Here are the results of each instance size, over time.

Micro:

t2-micro

Small:

t2-small

Medium:

t2-medium

By looking at the data grouped by instance type, we’re hoping to see straight horizontal lines – jagged lines mean an instance’s results were jumping around. At this point, mediums don’t appear to be any more consistent than micros.

You can also view these per website:

Personal Website:

t2-personal

Google:

t2-google

Yahoo:

t2-yahoo

Amazon:

t2-amazon

Looking at one URL at a time, we’re hoping to see all the data grouped as closely together as possible. And for the most part, we do. There is a jump for Micro on the personal site, but since we see something similar for Medium on Yahoo, I’m not going to hold it against the Micro. Another interesting finding is that you see more variability the larger the site is (e.g. Google never strays beyond 1.4–1.6 seconds, while Amazon ranges from 6–10).

Considering all the data, I was surprised to see a Micro instance performing about as consistently as a Medium. I got excited about the cost savings I was about to realize by downgrading my existing agents (it would be several hundred dollars per month!). But then I realized my test was flawed.

Testing Secure Sites

After running my first set of tests, I unfortunately came across a comment claiming that HTTPS creates a bottleneck on the agents. None of my tests were accessing secure sites. Darn.

The reasoning is that the extra processing required to encrypt/decrypt SSL traffic will limit your tests. Was it true that we’d be maxing out the processing power on these machines?

I ran a couple tests and monitored CPU usage directly from the task manager. I didn’t like looking at the CPU statistic from getTesters.php since it didn’t seem to be real time.

CPU Usage on Micro:

cpu-usage-micro

CPU Usage on Small:

cpu-usage-small

CPU Usage on Medium:

cpu-usage-med

If we were maxing out the micro and small agents on insecure traffic, there’s no way they’d be able to handle HTTPS.

I repeated my original tests using only secure sites on t2.small and t2.medium sizes.

By instance type, over time:

Secure Sites, Small:

secure-small

Secure Sites, Medium:

secure-med

Remember, when the data is grouped by instance type, we’re hoping to see straight horizontal lines – jagged lines mean an instance’s results were jumping around.

Per secure website:

Google (secure):

secure-google

Yahoo (secure):

secure-yahoo

Personal Site (secure):

secure-personal

Remember, when looking at one URL at a time, we’re hoping to see all the data grouped as closely together as possible.

Looking at the graphs, you can see a huge performance difference between small and medium on my personal site over HTTPS. There is a big blip on Yahoo-Medium, but I’m going to disregard it since it was a one-off. The personal site graph proves that smalls are bottlenecked by the secure-traffic processing, and you need an agent with more power.

From this data, I’d recommend using a medium if you’re going to be testing any traffic over HTTPS. However, I was about to be proven wrong once again.

Can’t use t2’s long term

I had just decided to use t2.medium-sized agents and let them run over the weekend. When I got back, I realized something dramatic had happened. Here’s a plot of the load time, speed index, and TTFB for a single page over a few days:

t2-failing

All of a sudden tests started taking about 3x as long to load pages. What happened??

I reached out to Patrick to see if he’d seen anything like that, and after going back and forth we realized it was because I was using a t2 tiered instance type. From Amazon:

“T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline. Instances in this family are ideal for applications that don’t use the full CPU often or consistently, but occasionally need to burst (e.g. web servers, developer environments, and small databases).”

In other words, t2s are by definition going to be unreliable, since they only get periodic bursts of performance. When I started my tests they were burning through their allotted burst credits, but once those ran out, their baseline performance kicked in. It was obvious that t2s were not going to work for me.
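If you suspect the same thing is happening to your agents, one way to confirm it is to watch the CPUCreditBalance metric drain toward zero in CloudWatch. A rough sketch with the AWS CLI – the instance ID and time window are placeholders:

# check the burst credit balance of a t2 agent over a few days
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-12345678 \
  --start-time 2015-01-05T00:00:00Z \
  --end-time 2015-01-08T00:00:00Z \
  --period 3600 \
  --statistics Average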

The bad news is that going up to a fixed-performance tier (i.e. m3) was going to be more expensive, especially since it doesn’t even offer the same specs as the t2 tier. Check out the difference in specs between t2 and m3:

ec2-instance-types-m3

Onto the m3 tier

Time to spin up some m3 instances and test all over again. I didn’t have time to test several sites again, so I set up a single URL to test and swapped out agents in the middle – starting with an m3.medium, then transitioning to an m3.large:

m3-large

Wow. Now there’s a big difference. The m3.large has a consistency I haven’t seen in any of my testing yet. It’s obvious that large is the way to go. I hate to accept that, because they’re expensive – at $0.266/hour, it’s going to increase my costs quite a bit!

Looking at the m3.large’s specs, I wondered if I could use another tier of instances that would be cheaper, i.e. compute- or memory-optimized. And good news – there is. A c3.large has about the same specs (minus a little RAM) and is cheaper:

ec2-instance-types-c3

But let’s put it to the test:

to-c3

Can you see the moment I switched from an m3.large to a c3.large? I can’t either. They perform identically as WebPageTest agents, and the c3 is less expensive.

Conclusion

I hope you just skipped down to this section, because that was a lot to go through. The simple answer for which EC2 agent size you should use: it depends.

  • If you’re running just a couple tests per hour, on small HTTP sites, a t2.micro will be okay ($13/month)
  • If you’re running just a couple tests per hour, on large or secure sites, you’ll need to use a t2.medium ($52/month)
  • If you’re running lots of tests per hour, you can’t use t2’s –  the most efficient agent will be a c3.large ($135/month)

I don’t know the magic number of tests per hour you can get away with on a t2 instance (before it’s maxed out on burst credits), so maybe someone else can run those tests. But if you’re attempting to get away with a burstable tier, just be warned that your results may eventually become inconsistent after enough tests.

I hate to suggest that everyone spend money on a c3.large instance, as it’s not cheap, but it has proven to be the best for all cases. I’ve been running one for months now, and it’s incredibly stable.

Note: Since running these tests, a new generation of c4.large instances has come out. However, they’re more expensive than the c3s, and since the c3s have been amazing, I’m not going to bother upgrading.

Hope that helps everyone decide! Feel free to ask questions/comment below.

Deleting old tests on a private WebPageTest server

In a prior post I mentioned that I had run out of disk space on my private WebPageTest instance. I temporarily fixed it by mounting another drive for the results, but what happens when that disk fills up? I’d be in the same situation all over again!

To avoid another full disk, I decided to delete old tests to make room for new ones. I really don’t need to keep every old test around – if a test is that old, the site has most likely changed since then, so the results are obsolete anyway.

Where to delete old tests

Luckily, the results are structured in a way that makes deleting old tests extremely easy. Everything is organized into folders by year, then month, then day.

For example, a test I ran yesterday (Jan 8th, 2015) is stored in /www/webpagetest/result/15/01/08/….
If I don’t want any of the tests from yesterday, I can delete that ’08’ directory.

I can log in every once in a while and manually delete old months, years, etc. with a simple ‘rm -rf’ on the directory I want to remove. This is the easiest solution, but it requires regular maintenance.
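To make that concrete, here’s roughly what a manual cleanup looks like – the path below matches my install, so adjust it to wherever your results live:

cd /var/www/webpagetest/results      # or wherever your results directory is
rm -rf 14                            # drop every test from 2014
rm -rf 15/01                         # or just January 2015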

Automatically deleting old tests

I hate the idea of manually deleting old tests – it takes work to remember to do it, and who wants more work? Let’s figure out a way to do it automatically, so I never have to log in and the server will hum along just fine years from now.

If you’re a unix guru, you could probably whip up a shell script to delete old files. But WebPageTest already has something better built in: the ‘archiving’ feature. You set how many days to keep tests locally, and after that, old tests are sent to an archive location.

We don’t need the tests archived, but once tests are archived, they get deleted. All we need to do is disable the archiving step but let the deleting continue, and we have our solution.

It’s really simple to modify archive.php so it skips the archiving and just deletes: comment out a block of code in the CheckTest function of archive.php.

Here’s the code I commented out:

code-modification

Now all you need to do is add the cli/archive.php script as a cron job so it runs regularly, and we’re set!

[root]$ sudo crontab -u apache -e
HOME=/var/www/webpagetest/cli/
0 * * * * php archive.php

Finally, set archive_days and archive_dir in settings.ini. Although archive_dir isn’t actually used (since we’re not archiving), it must be set to something for the archive script to run properly.
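For reference, the two settings end up looking something like this – the number of days and the directory are just example values for this sketch:

; keep tests locally for 30 days; archive_dir just needs to point somewhere valid
archive_days=30
archive_dir=/var/www/webpagetest/archive/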

Note: If you copied your previous test results to the new results disk, every test will have a recent ‘last accessed’ time, so you’ll need to wait at least as many days as you specified in archive_days before it starts archiving any of your previous tests.

You’re done, hope that helps!

Fixing a full disk on WebPageTest

In a previous post I described how I determined that my private WebPageTest server, hosted on Amazon EC2, had a full disk. Here I’ll walk you through what I did to create more disk space so I could run web page tests again.

Disk Space on Amazon EC2

I was using an m3.medium instance type, which only has a 15GB volume attached (though only 8GB of it is usable).

Since we’re on Amazon’s elastic platform, it should be easy to scale and add more disk space, right? After all, we’re in “the cloud” and storage looks cheap – I could add another 100GB for only $10/month.

That seemed like the obvious route, but let’s look at all of our options first.

Enabling archiving

I thought about enabling the archiving feature of WebPageTest. You specify an archive directory (or Amazon S3 bucket), and tests get zipped and stored elsewhere. This sounded like a good solution, until I ran a quick command to zip one of the tests:

[root]# zip -r QN.zip QN
[root]# du -sh *
916K QN
712K QN.zip

As you can see, it only saves about 20% per test. That’s a temporary fix, not one that’s sustainable in the long run. Let’s plan on enlarging the disk itself.

Enlarging the server’s drive

I next attempted to enlarge the existing instance’s disk space by following this tutorial: Expanding the Storage Space of a Volume

I got close to completing this, but then I ran into an error:

[root]$ sudo resize2fs /dev/xvda1
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 2098482 blocks long. Nothing to do!

According to the documentation, I was going to have to follow another long set of steps in this tutorial: Expanding a Linux Partition

I give up, there has to be a better way.

Attaching another disk volume

What if, instead of expanding and repartitioning the existing drive, I just add another drive to the server? If I can mount it directly at the /var/www/webpagetest/results directory, then I won’t even have to move the location of the test results!

Here’s the tutorial that shows you how to add another disk to your EC2 instance: Add a Volume to Your Instance

I renamed the current results directory to ‘results_old’, then, once the new drive was mounted, copied everything from results_old to results. The permissions were off, so to make everything accessible and writable by apache, you just need to change the owner:

sudo chown -R apache:apache results
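Putting the whole sequence together, it looks roughly like this – a sketch only, assuming the new volume shows up as /dev/xvdf (check lsblk for your actual device name):

sudo mkfs -t ext4 /dev/xvdf            # create a filesystem on the new, empty volume
cd /var/www/webpagetest
sudo mv results results_old            # keep the old tests around for now
sudo mkdir results
sudo mount /dev/xvdf results           # mount the new volume at the results path
sudo cp -a results_old/. results/      # copy the old tests onto the new volume
sudo chown -R apache:apache results    # make everything writable by apache again
# add an /etc/fstab entry too if you want the mount to persist across reboots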

And just like that, we’re back in business with a new large place to store our test results.

Still not completely solved

Even with the extra space, I realized I’d never be able to attach more than 1000GB, and with each test taking up about 1MB (I had video screenshots enabled), I’d hit that limit in a couple of months (I was running a lot of tests!). I needed a longer-term solution than that.

I ended up also deleting really old tests every night to make room for new ones. Do I really need to keep a test that ran 6 months ago? The site has most likely changed dramatically since then! Read about deleting old webpagetest results, and your server will be maintenance-free!