Very slow query on multi-million row table

Hello. I have a table with 20 million rows (and nearly 30 columns). The problem is that a query takes nearly half an hour. Another problem: I need to search with different WHERE conditions every time, so if I created indexes for these searches I would need nearly a hundred of them.
Can anybody help me with this problem?

20 million rows and 30 columns?

I think you also need to split some fields out of that one table into other tables…

Do you have ANY index? 20 million rows should not be a problem, and neither should 30 columns. I have a table with ~70 million rows (OK, only 8 columns) and it works well, for all queries that can use an index.

What's the output of "SHOW INDEXES FROM <table_name>"?

You MUST create some indexes, at least for the most frequently used queries.
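For example, if most of your queries happen to filter on the same one or two columns, a composite index on them can help (the column names below are only placeholders, not from your schema):

CREATE INDEX idx_status_created
ON SC_DETAILS (status, created_at);

A query whose WHERE clause starts with those columns (in that left-to-right order) can then use the index instead of scanning all 20 million rows.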

Or - if you have indexes - what's the output of:

use information_schema;

SELECT *
FROM TABLES
WHERE TABLE_SCHEMA LIKE 'melin_system'
AND TABLE_NAME LIKE 'SC_DETAILS';
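It would also be worth running EXPLAIN on the slow query itself, to see whether MySQL is using any index or doing a full table scan (the WHERE clause below is just a placeholder for your real query):

EXPLAIN SELECT *
FROM SC_DETAILS
WHERE some_column = 'some_value';

If the "key" column in the output is NULL and "rows" is in the millions, the query is scanning the whole table.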

[B]sp1r1t wrote on Wed, 13 February 2008 15:13[/B]
- I need to search with different WHERE conditions every time, so if I created indexes for these searches I would need nearly a hundred of them.
You must have at least some columns that appear in your WHERE clauses more often than others. If you analyze your query pattern and find these columns (and they have high enough cardinality), then you should be able to create indexes on them and utilize them in your queries.
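A quick way to check a column's cardinality (how many distinct values it has relative to the row count) is a query like this - again, the column name is only an example:

SELECT COUNT(DISTINCT status) AS distinct_values,
       COUNT(*) AS total_rows
FROM SC_DETAILS;

A column where distinct_values is close to total_rows is a good index candidate; one with only a handful of distinct values (e.g. a yes/no flag) usually is not, because the index would not narrow the search much.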

There are very few occasions where no index at all is the optimal solution.
In those cases the only way to speed things up is brute force: the fastest CPU available and enough RAM that the entire table stays cached in memory.