ORION IO Calibration Cookbook


ORION is one of the most important calibration tools in my toolbox. In any project, before installing any Oracle software (Clusterware, ASM, or RDBMS), the first thing I do is calibrate the disks of the system. That is because history is full of storage-related Oracle stories.

In this post, you will find a simple cookbook for calibrating a storage array using ORION. I will go through a real calibration test I did three months ago.

Preparing a .lun file

In any ORION test, the first thing you should do is prepare a LUN file with the extension .lun listing the devices to be calibrated (assuming you will be testing raw devices; ORION also lets you test file systems, in which case you should use synthetically created large files in place of device names). Here is our .lun file:

[oracle@consol10g orion]$ cat mytest.lun
/dev/dm-2
/dev/dm-3
/dev/dm-4
/dev/dm-5
/dev/dm-6
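As noted above, ORION can also test file systems through large flat files listed in the .lun file. Here is a minimal sketch using dd; the path, file name, and size are illustrative only, and a real test file should be several GB so it dwarfs the file-system cache:

```shell
# Create a small illustrative flat file for a file-system ORION test;
# for a real test use files of several GB (the 16 MB here is just a demo).
mkdir -p /tmp/oriontest
dd if=/dev/zero of=/tmp/oriontest/bigfile1 bs=1M count=16 2>/dev/null

# List the file (instead of a device name) in the .lun file
echo /tmp/oriontest/bigfile1 > mytest_fs.lun
cat mytest_fs.lun
```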

 

One important point: ensure that the OS user who will perform the test has the necessary permissions on the devices you will be calibrating.
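A quick way to sanity-check those permissions is to loop over the .lun file and test read access as the calibration user. This is only a sketch; the device names are the ones from our mytest.lun above, and the file is recreated here so the snippet is self-contained:

```shell
# Recreate mytest.lun from above so the snippet is self-contained
printf '/dev/dm-2\n/dev/dm-3\n/dev/dm-4\n/dev/dm-5\n/dev/dm-6\n' > mytest.lun

# Verify that the current OS user can read every device listed
while read -r dev; do
  if [ -r "$dev" ]; then
    echo "readable:     $dev"
  else
    echo "NOT readable: $dev"
  fi
done < mytest.lun
```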

Small Random & Large Sequential Read Load

I always start with pure small (8K) random and pure large (1M) sequential read load calibrations. That is simply because if these two tests fail to satisfy your performance needs, there is no point in running any other test; you can simply call your vendor or infrastructure team back.

Here is the ORION syntax we will be using

[oracle@consol10g orion]$ ./orion_lnx -run advanced -testname mytest -num_disks 40 -simulate raid0 -write 0 -type seq -matrix basic -cache_size 67108864 -verbose

Let us discuss this parameterization. To control the other options in detail, I have set the run option to advanced. The testname option should be set to the name of the .lun file. num_disks is the number of disks in your storage array; actually, you can set this parameter to any value you like, since it is just an input for calculating the maximum number of small-IO and large-IO requesters (for a value less than 10, max small IO requesters = 5 * num_disks and max large IO requesters = 2 * num_disks). The simulate parameter is always raid0 for me, because I always use ASM. The type option defines the type of large IOs (sequential/random). Since this test covers pure random and pure sequential IO loads, the matrix option is set to basic. cache_size is a critical parameter you should learn by getting in touch with your storage admin: it is the size of your storage array cache in MB. If you set it too low, your outputs will be too good to be true; if you set it too high, your test run time will be too long. verbose tells ORION to print progress status on standard output.
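As a concrete illustration of the requester formulas quoted above, here is a tiny shell sketch applied to our num_disks 40 run (the formulas themselves are the ones from the text, not an ORION output):

```shell
# Maximum number of concurrent IO requesters ORION derives from num_disks,
# per the formulas quoted in the text
num_disks=40
echo "max small IO requesters: $((5 * num_disks))"
echo "max large IO requesters: $((2 * num_disks))"
```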

Keep in mind that you should run ORION from the same directory where the mytest.lun file resides. When the execution is over, you will find four new files in your current working directory: mytest_*_summary.txt, mytest_*_iops.txt, mytest_*_lat.txt, and mytest_*_mbps.txt.

Figure 1 Pure Random and Pure Sequential Loads

As you can see, the number of read IOPS increases with the number of outstanding random read requesters up to a certain point. This is what we expect from queuing theory. However, there is another restrictive metric on top of that, called latency. If the service time of a single read request exceeds 10 ms on average (some say 20 ms), users will start to suffer.

So keep in mind that doing ~100,000 IOPS means nothing by itself. What is meaningful is that you can deliver ~3500 IOPS with 19.5 ms service time. In other words, never use the IOPS metric alone; always pair it with average service time, latency, queuing time, and similar metrics.

The interpretation of sequential IO is different. In systems demanding high throughput, nobody cares about the service time of a single IO request. The important thing is to fully utilize the whole storage infrastructure in order to deliver the highest throughput possible. Therefore, a statement like "160 MB/s read throughput" is fine on its own.

 

Mixed Read Load

The last thing you should test is the behavior of the storage array under various mixed loads. Let me illustrate the importance of this test. You might see some systems running peacefully all day long except for 1-2 hours at night (the backup window). Alternatively, you might see some DBAs periodically killing reporting users' sessions in OLTP systems.

These are all related to insufficient IO configurations. Large IO requests caused by backup, reporting, and similar activities may cause a severe change in the service time of small IO requests. Therefore, you should know a priori in what proportion small and large IO requests start causing a problem on your storage infrastructure. Here is the ORION syntax:

[oracle@consol10g orion]$ ./orion_lnx -run advanced -testname mytest -num_disks 40 -simulate raid0 -write 0 -type seq -matrix detailed -cache_size 67108864 -verbose

Notice that the only change here is that the matrix option is set to detailed. When the matrix option is basic, ORION generates random IO at N different levels without any sequential IO requesters, and then generates sequential IO at M different levels without any random IO requesters. When the option is detailed, ORION generates all MxN combinations of random and sequential IO generators.

Figure 4 Mixed Read Load

To interpret Figure 4, suppose that our storage array is capable of serving only 8K requests, so any larger request is chopped into 8K pieces. That means one large (1M) IO request corresponds to 128 small IO requests. Suppose further that the total capacity of our storage array is 2000 small IOPS. By simple division, this array can deliver either 2000 small (8K) IOPS, or about 15 large (1M) IOPS, or somewhere in between.

So as the number of outstanding large IO requesters increases, the total number of IOPS decreases.
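The back-of-the-envelope trade-off above can be sketched in shell. This assumes binary units (1 MB = 1024 KB), so one large request costs 128 small ones; the 2000-IOPS capacity is the hypothetical figure from the text:

```shell
# Hypothetical array that serves everything in 8K chunks
small_kb=8
large_kb=1024
capacity_small_iops=2000

chunks_per_large=$((large_kb / small_kb))              # small IOs per 1M IO
max_large_iops=$((capacity_small_iops / chunks_per_large))

echo "one 1M request costs ${chunks_per_large} small requests"
echo "pure large-IO ceiling: ${max_large_iops} IOPS"
```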

Now assume that sustaining 1500 IOPS costs 10 ms of average service time, and 3000 IOPS costs 20 ms. While sustaining 1500 IOPS, we can either move along the large-requester axis, where an additional 12 large IOPS brings us to 20 ms latency, or move along the small-requester axis, where an additional 1500 small IOPS brings us to 20 ms latency (we may also choose a third option somewhere in between). As a result, an increase in large IO also results in an increase in service time.

Conclusion

In this post, I tried to show how ORION can help you detect possible disk performance problems before they occur in production. Remember that an enterprise storage infrastructure is more than just a bunch of disks. A major performance problem can be related to components such as HBAs, switches, port issues, backend problems, or even the IO scheduling algorithm you use.

MATLAB Code to Create Charts

To generate the plots you have seen in this post, you can use the orion_chart_basic.m and orion_chart_detailed.m scripts with MATLAB. When you execute them (F5), they will ask you to choose a directory. This directory should contain all of the ORION output files of a single run. Once you pick the directory, MATLAB will do the rest.

CAUTIONS

  • For tests executed with the matrix option set to basic, use orion_chart_basic.m
  • For tests executed with the matrix option set to detailed, use orion_chart_detailed.m
  • Each directory you choose should contain one and only one set of ORION run output files.
  • All four MATLAB files should be in the same working directory.

randomIOChart.m

function randomIOchart(concSmallIO, IOPS, latency)
[haxes, hline1, hline2] = plotyy(concSmallIO, IOPS, concSmallIO, latency, 'plot');
axes(haxes(1));
xlabel('Outstanding Small IO')
ylabel('IOPS');
title('Random I/O', 'FontSize', 14)

% Advance j to the first point at or above 20 ms latency (or the last point)
j = 1;
n = numel(latency);
while latency(j) < 20 && j < n
    j = j + 1;
end

% Annotate the last point that is still under 20 ms
if j > 1
    text(concSmallIO(j-1), IOPS(j-1), [int2str(IOPS(j-1)), ' IOPS with ', num2str(latency(j-1)), ' ms latency \rightarrow'], 'HorizontalAlignment', 'right', 'FontSize', 12)
end

axes(haxes(2));
ylabel('Latency(ms)');

 

sequentialIOChart.m

function sequentialIOchart(largeIO, mbps)
plot(largeIO, mbps, '-rs', 'LineWidth', 2, ...
    'MarkerEdgeColor', 'k', ...
    'MarkerFaceColor', 'g', ...
    'MarkerSize', 10)
xlabel('Outstanding Large IO')
ylabel('MB/s')
title('Sequential I/O', 'FontSize', 14)

 

orion_chart_basic.m

%Use this file for ORION test output executed with the -matrix basic option.
%When the directory selection popup appears, pick the folder that contains
%the ORION run outputs.
orion_directory = uigetdir();
latency_file = dir(fullfile(orion_directory, '*_lat.csv'));
iops_file = dir(fullfile(orion_directory, '*_iops.csv'));
mbps_file = dir(fullfile(orion_directory, '*_mbps.csv'));

% Latency matrix: data starts at row 1, col 1; row 0 holds the outstanding
% small IO counts
latency = csvread(fullfile(orion_directory, latency_file.name), 1, 1);
latency = latency(1,:);
small = csvread(fullfile(orion_directory, latency_file.name), 0, 1);
small = small(1,:);

iops = csvread(fullfile(orion_directory, iops_file.name), 1, 1);
iops = iops(1,:);

subplot(2,1,1);
randomIOchart(small, iops, latency)

% Throughput matrix: col 0 holds the outstanding large IO counts
mbps = csvread(fullfile(orion_directory, mbps_file.name), 1, 1);
large = csvread(fullfile(orion_directory, mbps_file.name), 1, 0);
large = large(:,1);

subplot(2,1,2);
sequentialIOchart(large, mbps);

 

orion_chart_detailed.m

%Use this file for ORION test output executed with the -matrix detailed option.
%When the directory selection popup appears, pick the folder that contains
%the ORION run outputs.
orion_directory = uigetdir();
latency_file = dir(fullfile(orion_directory, '*_lat.csv'));
iops_file = dir(fullfile(orion_directory, '*_iops.csv'));
mbps_file = dir(fullfile(orion_directory, '*_mbps.csv'));

% Row 0 holds the outstanding small IO counts, col 0 the large IO counts
latency = csvread(fullfile(orion_directory, latency_file.name), 1, 1);
small = csvread(fullfile(orion_directory, latency_file.name), 0, 1);
small = small(1,:);
large = csvread(fullfile(orion_directory, latency_file.name), 1, 0);
large = large(:,1);

subplot(2,2,1); surf(small, large, latency); xlabel('Outstanding Small IO'); ylabel('Outstanding Large IO'); zlabel('Latency (ms)');

iops = csvread(fullfile(orion_directory, iops_file.name), 1, 1);

subplot(2,2,2); surf(small, large, iops); xlabel('Outstanding Small IO'); ylabel('Outstanding Large IO'); zlabel('IOPS');

mbps = csvread(fullfile(orion_directory, mbps_file.name), 1, 1);
small = csvread(fullfile(orion_directory, mbps_file.name), 0, 1);
small = small(1,:);
large = csvread(fullfile(orion_directory, mbps_file.name), 1, 0);
large = large(:,1);

subplot(2,2,[3 4]); surf(small, large, mbps); xlabel('Outstanding Small IO'); ylabel('Outstanding Large IO'); zlabel('MB/s');

About kocakahin

Just a computer engineer

Posted on March 31, 2009, in Oracle, Uncategorized.

  1. Nice article; there is little ORION information out there, much appreciated.
    Question: I have a new Dell MD3000 that I want to test out.
    The final configuration should be one disk group DATA sitting on top of a 14-disk hardware RAID10.
    Do I need -simulate raid10 even if my RAID10 is already configured at the hardware level?

    • Hi Steeve,
      The reason we set the -simulate option to raid0 with ASM is that ASM naturally stripes data across all 14 LUNs prepared as RAID10. When you use ASM in your case, you will be doing so-called "double striping," which is perfectly OK. The only thing you should be careful about is that if you set the stripe depth of the ASM diskgroup to a value different from the default (1M), you should also set the corresponding -stripe option in ORION.

  2. It was really a very nice article. Thanks a lot for putting this together.

    1. Where can I get the MATLAB software to draw the charts of ORION results?

    2. The -stripe option is not seen in ORION.

    I am testing ORION for a new storage configuration.

    Regards
    Prasad

  3. 1. MATLAB is a commercial product, so you cannot get it for free. But open-source Octave will (I believe) do the same job with slight modifications.
    2. The following runs will do the test you need for different stripe sizes (the stripe parameter is in KB):
    orion -run advanced -testname mytest -num_disks 3 -simulate raid0 -stripe 512
    orion -run advanced -testname mytest -num_disks 3 -simulate raid0 -stripe 1024
    orion -run advanced -testname mytest -num_disks 3 -simulate raid0 -stripe 2048

  4. On a typical SAN storage, what IOPS, latency, and service time should we expect?


    Oracle DBA.
