I have compiled a new Intel Modular Server multipath driver for Citrix XenServer 6.2 kernel based on instructions on my post:
Download the driver for XenServer 6.2 kernel from here:
I installed the driver (scsi_dh_alua_intelmodular_cbvtrak_xs62.i386.tar) on the Intel Modular Server with XenServer 6.2 (build 70446c), but I am hitting two errors:
– “multipath -ll” shows “invalid keyword: prio_callout” error
– “cat initrd-$(uname -r).img.cmd” doesn’t work
According to a Red Hat forum, prio_callout has been replaced with prio, but that change causes other errors.
Regarding the cat initrd issue, I was able to recreate the initrd with “source initrd-$(uname -r).img.cmd”
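For reference, the .img.cmd file next to each initrd simply records the mkinitrd command line used to build it, which is why sourcing it works while cat-ing it does not. A minimal demonstration of the mechanism, using a stand-in file rather than the real one in /boot:

```shell
# The real file lives at /boot/initrd-$(uname -r).img.cmd on XenServer.
# Here we use a hypothetical stand-in so the mechanism is easy to see.
cmd_file="/tmp/initrd-demo.img.cmd"
printf 'echo "mkinitrd would run here"\n' > "$cmd_file"

cat "$cmd_file"      # only prints the recorded command, does not run it
source "$cmd_file"   # executes the recorded command in the current shell
```

So `cat initrd-$(uname -r).img.cmd` was never going to rebuild anything; `source` (or `.`) is what actually re-runs the recorded mkinitrd command.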
Have you tested your driver with a clean install?
I have installed the driver on a fresh copy of 70446c. I am getting the error on prio_callout as well, and I am having intermittent multipath alerts.
[root@ims002b1 ~]# multipath -ll
Aug 21 11:05:38 | multipath.conf line 14, invalid keyword: prio_callout
222ac000155c1a73f dm-2 Intel,Multi-Flex
size=1.7T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 0:0:0:1 sdb 8:16 active ready running
`- 0:0:2:1 sde 8:64 failed ready running
Any idea what can be causing this?
Linux ims002b1 2.6.32.43-0.4.1.xs1.8.0.835.170778xen #1 SMP Wed May 29 18:06:30 EDT 2013 i686 i686 i386 GNU/Linux
I have not tested it yet, but I plan to within the next few weeks. Looks like we have to investigate how to use the new prio keyword with Intel’s dual storage controllers. If this doesn’t work, maybe we can still use the old dm-multipath that worked nicely. I will let you know when I have finished testing XenServer 6.2 on the IMS.
Just for information: XS 6.1 ships device-mapper-multipath 0.4.7 and XS 6.2 ships device-mapper-multipath 0.4.9. In the latter, the prio_callout keyword has been deprecated. I will try installing the old version on XS 6.2.
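For context, the syntax change between the two versions looks roughly like this (the exact callout line shipped in the Intel package may differ; the helper path shown is the common ALUA form from the 0.4.7 era):

```
# device-mapper-multipath 0.4.7 (XS 6.1): priority comes from an
# external helper binary invoked per path:
#   prio_callout "/sbin/mpath_prio_alua /dev/%n"

# device-mapper-multipath 0.4.9 (XS 6.2): callouts are removed and the
# prioritizer is built in, selected by name:
prio alua
```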
Thanks, at this point, I’ll try the 6.1 driver (which worked fine) on a test machine and see if it stabilizes the SR.
So far, I’ve tried the following:
1. Reverted back to 6.1 driver, using multipath.conf that came with 6.1 driver. Same result.
2. Installed the 6.2 driver, modified multipath.conf to remove prio_callout and replaced it with: prio “alua”, without a positive effect.
I also used this document: http://downloadmirror.intel.com/18617/eng/RHEL6_MPIO_Setup.pdf and modified /etc/multipath.conf, changing prio_callout to prio “tpg_pref”, with the same result.
Finally got the driver stable, some changes were needed to multipath.conf:
## Use user friendly names, instead of using WWIDs as names.
defaults {
    user_friendly_names no
}
## some vendor specific modifications
devices {
    device {
        vendor "Intel"
        product "Multi-Flex"
        path_grouping_policy "group_by_prio"
        getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
        #getuid_callout "/sbin/scsi_id --whitelisted --device=/dev/%n"
        prio alua
        path_checker tur
        path_selector "round-robin 0"
        hardware_handler "1 alua"
        failback immediate
        rr_weight uniform
        rr_min_io 100
        no_path_retry queue
        features "1 queue_if_no_path"
        product_blacklist "VTrak V-LUN"
    }
}
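After editing /etc/multipath.conf, the maps have to be rebuilt before the change takes effect. On XenServer 6.x the sequence is roughly the following (service name and init-script path are assumptions; adjust to your install):

```shell
# Restart the daemon so it re-reads /etc/multipath.conf
# (assumed service name on XenServer 6.x):
service multipathd restart

multipath -F     # flush all unused multipath device maps
multipath -v2    # rescan devices and recreate the maps
multipath -ll    # verify: no keyword errors, all paths active/ready
```

If `multipath -ll` still prints a keyword error, the config file was not picked up or still contains the deprecated directive.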
Great news John. Did you get it working with the default device-mapper-multipath 0.4.9 in XS 6.2? I have not had a chance to try yet.
Yes, the package you provide just needs to be updated with the multipath.conf that I provided. I’ve been throwing a ton of I/O at the disk array with zero issues. I would like confirmation of the fix by another person. I even recompiled the driver and got the same size RPM. I think it all works.
Nice. My IMS server is in production and I can’t risk pulling out any of the storage modules to test at the moment; I can only test XS 6.2 on one of the modules without a pull-out test. Did you do the pull test too, preferably several times on different storage modules?
Here’s some more testing done by John:
All tests passed, I took a unit that was pre-production and did the following:
1. Removed link from SAS controller #2 to external disk array during I/O — no issues.
2. Replaced link, waited 30 seconds for multipath — no issues.
3. Removed link from SAS controller #1 to external disk array during I/O — no issues.
4. Replaced link, waited 30 seconds for multipath — no issues.
5. Removed SAS controller #2 internal drives with heavy I/O — no issues.
6. Replaced SAS controller #2 — no issues.
All good!
John
Here’s a good tutorial on how to test multipath failures without physically removing the controllers: http://thjones2.tumblr.com/post/1602318441/linux-multipath-path-failure-simulation
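The same soft-failure technique can be done directly through sysfs, without physically pulling anything. A sketch (the device name sde is just an example taken from the `multipath -ll` output earlier in this thread; substitute one of your own path devices):

```shell
# Take one path offline to simulate a failed link/controller:
echo offline > /sys/block/sde/device/state

multipath -ll    # the path should now show as failed; I/O continues
                 # over the remaining path if multipathing is healthy

# Restore the path when done:
echo running > /sys/block/sde/device/state
```

Repeating this for each path device, one at a time under load, approximates the cable-pull tests above without touching the hardware.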
Hi all,
How did you get it working with XenServer 6.2 on IMS? XenServer constantly reports multipath changes.
We have a dual controller module on the IMS. Everything works great except on Storage Pool 1, where all the XenServer 6.2 boot partitions are installed.
I have made the changes to the multipath configuration, but it is not working.
Could you please write down the instructions?
thanks all