
One of the DB2 9 features heavily marketed by IBM is without doubt STMM, the Self-Tuning Memory Manager. When I first heard about it, I didn't like the idea. What nonsense, database administrators have trained in performance tuning for years and now they can be replaced by some program? What can a program know about database tuning? After thinking about it for a while, I admitted that if we take human knowledge about DB tuning and put it into a program, it might do something useful. But I was still skeptical, because all previous attempts to auto-tune a system as complicated as an RDBMS failed, and the auto-tuning systems did more harm than good.

I did a few experiments with STMM in DB2 9.5 and 9.7. STMM in 9.7 is better. It is still not perfect, but I recommend using it in production. Start with STMM and, if there are no performance problems, keep it on. If there are, turn off AUTOMATIC for selected areas. If that does not help, turn it off and start polishing the tuning by hand. In some installations with changing workloads you can run STMM for each workload type, record its values, turn it off and then change the DB2 settings via a script before your workload changes (see the sketch below). You will still have full control over the database and keep all the benefits of STMM's tuning.
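As a rough sketch of what hardcoding the recorded values can look like (the database name MYDB and all sizes below are placeholders, not values from my tests), such a script switches the parameters STMM was managing from AUTOMATIC to fixed values via the DB2 command line:

    db2 connect to MYDB
    db2 "alter bufferpool IBMDEFAULTBP size 50000"    # fixed size in pages instead of AUTOMATIC
    db2 update db cfg for MYDB using SORTHEAP 2048 SHEAPTHRES_SHR 20000
    db2 update db cfg for MYDB using LOCKLIST 8192 MAXLOCKS 60
    db2 connect reset

A script like this can run from cron just before the workload changes; setting SELF_TUNING_MEM OFF in the database configuration is what disables STMM completely.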

STMM needs about 20 minutes to tune for a given workload (it can be about 10 minutes if you have a lot of spare RAM). If you are running benchmarks, they need to run for about half an hour to get stable numbers. STMM is not suitable for low-memory conditions, such as small virtual machines. While DB2 can still operate in 256 MB of RAM, STMM is unable to tune it properly.

If you need to tune DB2 for a small virtual machine, run DB2 in a large virtual machine first (I use 2 GB of RAM), record what STMM settles on (see below) and hardcode those settings. Then shrink the memory as much as possible while keeping the system from swapping.
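To see the values STMM has actually settled on in the large VM before hardcoding them, you can dump the current in-memory configuration; MYDB is again just a placeholder database name:

    db2 connect to MYDB
    db2 get db cfg for MYDB show detail    # current values of SORTHEAP, LOCKLIST, DATABASE_MEMORY, ...
    db2pd -db MYDB -bufferpools            # current (STMM-adjusted) buffer pool sizes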

STMM uses the free memory it discovers on the system. Unlike Oracle, no other database-related tuning is needed: just change the amount of memory allocated to the virtual machine and STMM will use it. This makes DB2 administration very easy, especially in a virtual environment.
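To be precise, this only works while the memory knobs are left under automatic control. In many installations that is already the default, but stated explicitly (MYDB is a placeholder) it looks roughly like this:

    db2 update dbm cfg using INSTANCE_MEMORY AUTOMATIC
    db2 update db cfg for MYDB using SELF_TUNING_MEM ON DATABASE_MEMORY AUTOMATIC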

I am attaching two graphs showing STMM tuning over time. You can find a lot of such graphs on the internet, and they are quite common in IBM's DB2 v9 presentations, so I chose to show graphs with STMM failures instead. They measure transactions per second on DB2 9.7 running in a virtual machine. DB2 9.5 shows similar behavior but is about 30% slower. DB2 9.7 is claimed to be tuned for VMware, and I can confirm that VMware performance really did increase over 9.5. It was not too slow in DB2 9.5 either; we were happy with the VMware performance there.

Let's take a look at the graphs. As you can see, performance oscillates between two states with roughly a 1:1 duration. This is caused by the low amount of memory available in the VM. If you hardcode the STMM-tuned values into DB2, you get nicely stable performance. Instead of disabling STMM it is often better to just hardcode the buffer pool size and DATABASE_MEMORY and keep STMM active, because it can still tune other things such as the sort memory area and the lock list. Keep some spare memory in the DATABASE_MEMORY arena, as sketched below.
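A minimal sketch of that middle ground, with a placeholder database MYDB and invented sizes in 4 KB pages: the buffer pool and the overall DATABASE_MEMORY arena are fixed, while the sort area and lock list stay under STMM control.

    db2 connect to MYDB
    db2 "alter bufferpool IBMDEFAULTBP size 40000"    # roughly 160 MB, fixed
    db2 update db cfg for MYDB using DATABASE_MEMORY 120000    # fixed arena with headroom above the buffer pool
    db2 update db cfg for MYDB using SELF_TUNING_MEM ON
    db2 update db cfg for MYDB using SORTHEAP AUTOMATIC SHEAPTHRES_SHR AUTOMATIC
    db2 update db cfg for MYDB using LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC
    db2 connect reset

With SELF_TUNING_MEM ON and a fixed numeric DATABASE_MEMORY, STMM redistributes memory only inside that arena, which is why some spare room should be left in it.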

We can compare these two graphs and see what happens after extending the VM memory by 200 MB of RAM. It is not enough to remove the hip-hop effect, but the average TPS increases and STMM tunes the database a bit faster. The hip-hop effect can also be caused by bad physical database design, such as missing indexes or rarely accessed LOBs placed in heavily loaded tables.
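If missing indexes are the suspect, one way to check (not something from my tests, just a pointer) is to run the DB2 Design Advisor against a representative workload; the database name and workload file here are placeholders:

    db2advis -d MYDB -i workload.sql -t 5    # recommend indexes for the statements in workload.sql, max 5 minutes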
