February 19, 2011: Problem and Hopeful Resolution
I had hoped deletes would increase load a bit, but not enough to slow down parsing and pageloads considerably. No such luck. Once deletes started happening, roughly 10 minutes of every 15 were spent in disk thrash as old data was found and tossed. Normally I'd be annoyed but not concerned about such things. However, parses took much longer (20-50 seconds instead of 4-10), CPU load went up to 3.0-3.5, and I/O wait doubled. After an hour, the parsing queue was about 60 realms deep, when really the queue should get no longer than 5-10 realms, and stick near 0 most of the time.
So, what to do? I finally had the excuse to pick up that SSD. ;) Thanks to the many generous donations this month, I could afford to purchase a 64GB Intel X25-E SSD, an enterprise-level SLC solid-state disk that'll support orders of magnitude more disk operations per second. The SLC architecture should also be able to endure the constant writes that this project makes to its storage. This should allow me to continue supporting all US realms while pruning data, and offer quicker pageloads as well.
I had done some research on Intel's upcoming enterprise-class SSDs, and it looks like their next-gen offerings (100GB, 200GB, 400GB) have been delayed until at least 2011Q3. So the price wasn't going to drop much anytime soon. Hopefully it'll be worth the money.
In the meantime, I'm not doing any deletes, so the database is still growing slowly with every crawl parsed. I should get the new drive in the middle of this week, when I'll set aside some scheduled maintenance time (probably 2-3 hours) to clean up the tables and move them to the new drive.
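For readers wondering what "clean up the tables" involves: with InnoDB's file-per-table layout, rows freed by deletes don't shrink the data files until the table is rebuilt. A rough sketch of that kind of maintenance (the table name here is hypothetical):

```sql
-- With innodb_file_per_table, space freed by DELETEs stays inside
-- the .ibd file until the table is rebuilt. For InnoDB, OPTIMIZE
-- TABLE maps to a full rebuild (ALTER TABLE ... ENGINE=InnoDB),
-- which is why a scheduled maintenance window is needed.
OPTIMIZE TABLE auctions;  -- 'auctions' is a made-up table name
```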
February 20, 2011, 03:36:32 nudave wrote:
Awesome! I always wanted an SSD :)
February 20, 2011, 10:19:15 Markus wrote:
Nice, do you have some more stats for the nerds? I.e. writes/reads per day/per second/peaks etc oh and how big is the database for an average realm/faction?
February 20, 2011, 20:45:27 admin wrote:
Can't really break up the db per realm/faction, as it's not stored that way. The whole US is around 30GB in the database right now, with over 200 million auctions over the past 2 weeks.
With auction data deletes disabled, here's iostat -m:
Since Nov 23rd (which includes our downtime from Dec 21 thru Feb 4), some MySQL statistics:
February 20, 2011, 23:19:13 Markus wrote:
Thanks! 2.83MB/s write really is a lot!
If I'm right, this means most data is never read back? Could you elaborate on why that is the case? Don't you need to read the previous scan back in to check for differences when updating?
February 21, 2011, 01:31:25 admin wrote:
I guess there are fewer reads because I have 16GB of RAM for InnoDB to do its caching magic for reads, and it commits all writes to disk whenever it can.
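The buffer pool's effect can be seen directly in MySQL's status counters (these SHOW statements are standard MySQL; the interpretation of the numbers is my own):

```sql
-- How much memory InnoDB has available for caching:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Logical read requests vs. reads that actually hit the disk:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- A large Innodb_buffer_pool_read_requests count relative to
-- Innodb_buffer_pool_reads means most reads are served from RAM,
-- which would explain the low read traffic reported by iostat.
```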
February 21, 2011, 16:28:24 Clint wrote:
Technology win right here - really happy to see the donations have picked back up so that you have access to the kind of hardware you need, Erorus.
February 22, 2011, 03:05:08 Coop wrote:
Have you considered adding partitioning on your tables? If you can partition them by date, your deletes can be trivial.
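For context, a sketch of what Coop is describing, assuming MySQL's RANGE partitioning (table and column names are made up for illustration):

```sql
-- Hypothetical sketch: range-partition auction snapshots by day
-- so that old data can be dropped wholesale.
CREATE TABLE auction_snapshots (
    auction_id BIGINT NOT NULL,
    seen_date  DATE   NOT NULL,
    buyout     BIGINT,
    -- MySQL requires the partitioning column in every unique key:
    PRIMARY KEY (auction_id, seen_date)
)
PARTITION BY RANGE (TO_DAYS(seen_date)) (
    PARTITION p20110219 VALUES LESS THAN (TO_DAYS('2011-02-20')),
    PARTITION p20110220 VALUES LESS THAN (TO_DAYS('2011-02-21')),
    PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- Dropping a day's data is then a near-instant metadata operation
-- rather than a row-by-row DELETE with all its disk thrash:
ALTER TABLE auction_snapshots DROP PARTITION p20110219;
```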
February 22, 2011, 03:33:40 admin wrote:
Coop: Partitioning is not viable for this project. I suspect there's a bug in using partitioning together with InnoDB's one-file-per-table setup.