Why would fixed-length rows make it any easier? All InnoDB has is a huge number of 16 kB pages, filled to 15/16 by default but in practice anywhere between half full and completely full, because rows get deleted and inserted over time (inserts happen in primary key order, so they may well land in an existing page rather than the last one). In every page it then has to check, for every row, whether that row was added after the current transaction took its REPEATABLE READ snapshot, or has been deleted by another transaction.
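To make that visibility check concrete, here is a minimal two-session sketch (the table t and its row counts are made up for illustration) of how REPEATABLE READ gives each transaction its own answer to COUNT(*):

    -- session 1 (REPEATABLE READ, the InnoDB default)
    START TRANSACTION;
    SELECT COUNT(*) FROM t;   -- returns, say, 100

    -- session 2, meanwhile (autocommit)
    INSERT INTO t (id) VALUES (101);

    -- session 1 again
    SELECT COUNT(*) FROM t;   -- still 100: the new row is newer than
                              -- this transaction's snapshot
    COMMIT;
    SELECT COUNT(*) FROM t;   -- now 101

Since two transactions can legitimately see two different counts at the same moment, there is no single number InnoDB could cache.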
The concept of fixed-length versus dynamic-length rows that MyISAM has (which I'm guessing is what Mr innvo was thinking of) is purely a MyISAM-internal thing and does not apply to InnoDB.
The reason MyISAM can be very fast at counting all rows in a table is an optimization that stores the table's row count in its metadata. But this kind of optimization only works with table-level locking, as MyISAM has, since that is the only situation in which you can be sure of the exact number of rows in the table.
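You can actually see that optimization in the query plan; assuming a MyISAM table named t_myisam (a placeholder name), EXPLAIN reports that no rows need to be read at all:

    EXPLAIN SELECT COUNT(*) FROM t_myisam;
    -- Extra: "Select tables optimized away"
    -- i.e. the count comes straight from the table's metadata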
Once you move to more fine-grained locking, such as the row-level locking InnoDB uses, you can no longer keep a global row count for the table, since you can never be sure that another transaction hasn't modified it in the meantime. That is why most storage engines/DBMSs have to actually count all the rows each time you issue such a query.
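For contrast, the same query against an InnoDB table (again a placeholder name) plans a scan over an entire index, and every entry it visits still has to pass the visibility check described above:

    EXPLAIN SELECT COUNT(*) FROM t_innodb;
    -- type: index  (full scan of the smallest index on the table)
    -- rows: an estimate, not the cached exact count MyISAM has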
The things you can do to optimize it are the following (a combined sketch appears after the list):
Use a small data type as the primary key, such as INT or BIGINT rather than a wide character key (important): InnoDB stores a copy of the primary key in every secondary index entry, so a slim key keeps the index it scans for COUNT(*) small.
Follow the general advice of giving the InnoDB buffer pool (innodb_buffer_pool_size) plenty of RAM, so the scan is served from memory and avoids disk reads.
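A combined sketch of both points (the names and the 4G figure are placeholders, not tuning advice for your workload):

    -- a slim integer PK keeps every index small, because InnoDB
    -- stores a copy of the primary key in each secondary index entry
    CREATE TABLE t (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        payload VARCHAR(255)
    ) ENGINE=InnoDB;

    # my.cnf: size the buffer pool so the index being counted
    # stays in memory instead of being read from disk each time
    [mysqld]
    innodb_buffer_pool_size = 4G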