[SOLVED] Maximum InnoDB table size

Hello,

I am currently running some benchmarks for the launch of an online service, and I was wondering about the InnoDB maximum table size. I am leaving the performance topic aside for the moment.

So my question is quite simple, and I think many people have already asked it:

  • What is the maximum possible size for an InnoDB table?

I have found the easy answer, which is 64 TB, but I want to dig a bit deeper and find a proper answer.

First, I should think that the server's file system will impose limits. Ext3 can handle a maximum of 2 TB per file and a maximum of 32 TB for the entire file system with 8 KB blocks. So this already narrows it down to 32 TB in my case. Is this the first hard limit that will hit me?

Second, as far as I understand, it is possible to place InnoDB data files on several file systems. One thing I did not understand is how I can tell InnoDB to store files of X MB (<2 TB) for the first 32 TB on device A, and then continue with extra files of X MB (<2 TB) for the second 32 TB, and so on.

Third, and last but not least: what happens when the 32 TB limit is reached for a table?

I know these are purely paper questions, but I prefer to find out beforehand, so I can make the right decisions now and not regret not having spent this time thinking it through.

Any hint is most welcome.

  • Greg

w00t wrote on Thu, 03 February 2011 16:40
> First, I should think that the server's file system will impose limits.

You should, because that is what’s going to be your limit.

w00t wrote on Thu, 03 February 2011 16:40
> So this already narrows it down to 32 TB in my case. Is this the first hard limit that will hit me?

No, because you are talking about one volume; if you have a server with that amount of disk space, you won't have it all in one volume.
Besides, you can switch to ext4, XFS or ZFS, which allow much larger volumes than ext3, if you need to.

w00t wrote on Thu, 03 February 2011 16:40
> One thing I did not understand is how I can tell InnoDB to store files of X MB (<2 TB) for the first 32 TB on device A, and then continue with extra files of X MB (<2 TB) for the second 32 TB, and so on.

You specify the home dir to be empty, which means that you have to give the absolute path to every data file you define, since the full path is a concatenation of the two. E.g.:

innodb_data_home_dir =
innodb_data_file_path = /volume1/db/ibdata1:2000M; … ;/volume10/db/ibdata2:2000M:autoextend
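To make that concrete, a fuller my.cnf fragment of the same idea might look like this (the paths and sizes are made-up examples). Note that only the last file in the list may carry the autoextend attribute:

[mysqld]
innodb_data_home_dir =
innodb_data_file_path = /volume1/db/ibdata1:2000M;/volume2/db/ibdata2:2000M;/volume3/db/ibdata3:2000M:autoextend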

w00t wrote on Thu, 03 February 2011 16:40
> Third, and last but not least: what happens when the 32 TB limit is reached for a table?

You will get a "table is full" error; see the manual.
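From the client that looks something like this (the table name here is just an example):

ERROR 1114 (HY000): The table 'mytable' is full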

w00t wrote on Thu, 03 February 2011 16:40
> I know these are purely paper questions, but I prefer to find out beforehand, so I can make the right decisions now.

Yes they are, and frankly all these numbers are so large that they don't pose a problem; I would be really surprised if you hit them.

The real problem is performance, since you very seldom have the luxury of an application with a gigantic database that only touches a few percent of it regularly, and where you can afford to wait a couple of days or a week (or longer) for a larger query to come back.
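For reference, the 64 TB figure you found is InnoDB's own ceiling: a tablespace is addressed with 32-bit page numbers, so with the default 16 KB page size the maximum is 2^32 × 16 KB = 64 TB.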

Thanks for your reply.

I am very aware of performance issues. In my experience, the rule of thumb (or the dumb rule) is that as long as your database fits in memory and you have fair CPU performance and a quick storage back-end, things are easy and performance isn't a big concern.
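As a quick sanity check I compare the total InnoDB data-plus-index size against the buffer pool, with something along these lines (plain information_schema columns, nothing exotic):

-- total InnoDB data + index size, in GB
SELECT SUM(data_length + index_length) / 1024 / 1024 / 1024 AS total_gb
FROM information_schema.tables
WHERE engine = 'InnoDB';

-- buffer pool size, in bytes, to compare against
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';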

I am aware of EXT4, XFS and ZFS. As much as ZFS is probably the most extensible file system, ZFS on Linux is a no-go in my opinion because of its partial support. I think EXT4 is a good candidate but still very young; XFS truly seems to be mature and stable. So right, yes, that is one problem solved.

OK, so one can split InnoDB across two files on two different file systems. That said, is two the maximum number of files, and can it be done afterwards, I mean when these files are already partially filled? I understand that for huge databases, sharding and multiple copies are the way to go for performance.

My questions were more about making sure that the right approach is taken now and that things can still be addressed later on, rather than the opposite.

  • Greg

Yes, you seldom have any performance problems as long as the database fits in RAM. The severe problems arise when the indexes alone are so large that they don't fit in RAM. The random read access pattern of index lookups does not scale well on normal hard drives (SSDs reduce this problem, but they are still pricey).

w00t wrote on Mon, 07 February 2011 09:46
> I think EXT4 is a good candidate but still very young; XFS truly seems to be mature and stable.

I’ve only had good experiences with EXT4, but yes, it is younger, and you also need to make sure that your backup/restore solution etc. can handle it properly.

w00t wrote on Mon, 07 February 2011 09:46
> Is two the maximum number of files, and can it be done afterwards, I mean when these files are already partially filled?

No, two is not the maximum; the " … " in the middle and volume1 … volume10 were meant to symbolize an arbitrary number of data files.
There is, however, some upper limit on how long this string can be, IIRC.

You can easily add data files later, but removing them is very difficult: it basically involves dumping out all the data, recreating the data files smaller/fewer, and then importing all the data again, which may not be feasible for a very large database due to the time it would take.
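For the record, adding a file is just a clean shutdown, a config edit and a restart. A sketch with made-up names and sizes (the one real rule, per the manual, is that if the previously last file was autoextending, you must first pin it to its current size, rounded down to a whole number of MB):

# before: one autoextending data file
innodb_data_file_path = /volume1/db/ibdata1:2000M:autoextend

# after: ibdata1 has grown to, say, 2512M; pin it and append the new file
innodb_data_file_path = /volume1/db/ibdata1:2512M;/volume2/db/ibdata2:2000M:autoextend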