Where I work we are contemplating using MQTs (Materialized Query Tables) for aggregating data on a monthly basis. We have had a debate as to whether this technology would be a good fit or whether we should be creating tables and writing a COBOL program to load the data once a month. Details:
1. We'll be loading millions of rows on a monthly basis.
2. The previous month's data will be deleted when the new month's data is loaded.
3. The data is static during the month; stated differently, the data only needs to change once a month.
4. Some on the team argue that the load process, which could use a load utility, would be more expensive if MQTs are used. I don't believe this is true, but I'm open to comments.
5. I have used MQTs on LUW for one project, but not on z/OS. They worked fine, but it seemed to me that we could have just defined tables and written load scripts to load the data in the tables to achieve the same result.
I don't know whether this is true, but it appears to me that DB2's optimizer can choose to retrieve data from an MQT (for instance, aggregated data) when the SQL is written to select summarized data from the base table. If that's true, then it sounds like MQTs can improve the performance of existing SQL without that SQL having to be rewritten to access the MQT directly. Is that correct?
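To make the scenario concrete, here's a rough sketch of the kind of MQT definition I have in mind (table and column names are made up for illustration). My understanding is that with ENABLE QUERY OPTIMIZATION, plus the appropriate special registers set, the optimizer can consider rewriting a matching aggregate query against the base table to use the MQT:

```sql
-- Hypothetical monthly aggregate MQT (names are illustrative only)
CREATE TABLE SALES_SUMMARY AS
  (SELECT ACCT_ID,
          SUM(AMOUNT) AS TOTAL_AMT,
          COUNT(*)    AS TX_COUNT
     FROM SALES
    GROUP BY ACCT_ID)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED
  MAINTAINED BY SYSTEM
  ENABLE QUERY OPTIMIZATION;

-- For a REFRESH DEFERRED MQT, automatic rewrite is only considered
-- when the application allows stale data, e.g.:
SET CURRENT REFRESH AGE ANY;

-- A query written against the base table...
SELECT ACCT_ID, SUM(AMOUNT)
  FROM SALES
 GROUP BY ACCT_ID;
-- ...could then be satisfied from SALES_SUMMARY without the SQL
-- being changed to name the MQT.
```

I'd welcome corrections if the z/OS behavior differs from what I saw on LUW.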
The safe thing to do is to just define tables and load them every month. However, I'm wondering whether there are benefits that I could be missing out on with that approach. It seems to me that the load process would be less expensive with an MQT, since a program would not be required to access the database to load the data; the refresh would run and the data load would occur entirely on the database side.
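In other words, the monthly job would reduce to a single statement rather than a COBOL load program (assuming a system-maintained MQT like the one above; SALES_SUMMARY is again a made-up name):

```sql
-- Monthly refresh: DB2 recomputes the aggregate and replaces the
-- MQT's contents in one operation, entirely inside the database.
REFRESH TABLE SALES_SUMMARY;
```

Whether that is actually cheaper than a utility load of a plain table is exactly what I'm hoping others can speak to.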
Thoughts and experiences are appreciated.