Virtual SCSI

Virtual SCSI is based on a client/server relationship. The Virtual I/O Server owns the physical resources and acts as the server or, in SCSI terms, the target device. The client logical partitions access the virtual SCSI backing storage devices provided by the Virtual I/O Server.

Virtual SCSI server adapters can be created only on the Virtual I/O Server. For HMC-managed systems, virtual SCSI adapters are created and assigned to logical partitions using partition profiles.

A vhost (virtual SCSI server) adapter behaves like a normal SCSI adapter: multiple disks can be assigned to it. Usually one virtual SCSI server adapter is mapped to one virtual SCSI client adapter, presenting backing devices to an individual LPAR. A virtual SCSI server adapter can also be made available to multiple LPARs, which is useful for virtual optical and/or tape devices, allowing removable media to be shared between several client partitions.

on VIO server:
root@vios1: / # lsdev -Cc adapter
vhost0  Available       Virtual SCSI Server Adapter
vhost1  Available       Virtual SCSI Server Adapter
vhost2  Available       Virtual SCSI Server Adapter
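
To see which backing devices and client partitions sit behind a given server adapter, lsmap can be used on the Virtual I/O Server (a minimal check; vhost0 is just the example adapter from the listing above):

lsmap -vadapter vhost0                                     <–shows the SVSA, client partition ID and the virtual target devices behind vhost0
lsmap -all                                                 <–same information for every vhost adapter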

The client partition accesses its assigned disks through a virtual SCSI client adapter, which presents the backing disks, logical volumes or file-backed storage to the client as virtual SCSI disk devices.

on VIO client:
root@aix21: / # lsdev -Cc adapter
vscsi0 Available  Virtual SCSI Client Adapter

root@aix21: / # lscfg -vpl hdisk2
hdisk2           U9117.MMA.06B5641-V6-C13-T1-L890000000000  Virtual SCSI Disk Drive

In SCSI terms:
virtual SCSI server adapter: target
virtual SCSI client adapter: initiator
(Analogous to the client/server model, where the client acts as the initiator.)

Physical disks presented to the Virtual I/O Server can be exported and assigned to a client partition in a number of different ways:
– The entire disk is presented to the client partition.
– The disk is divided into several logical volumes, which can be presented to a single client or multiple different clients.
– With the introduction of Virtual I/O Server 1.5, files can be created on these disks and exported as file-backed storage devices.
– With the introduction of Virtual I/O Server 2.2 Fixpack 24 Service Pack 1, logical units can be created from a shared storage pool.

The IVM and HMC environments present two different interfaces for storage management under different names: the storage pool interface under IVM is essentially the same as LVM under the HMC, and the terms are sometimes used interchangeably. So "volume group" can refer to both volume groups and storage pools, and "logical volume" can refer to both logical volumes and storage pool backing devices.

Once these virtual SCSI server/client adapter connections have been set up, one or more backing devices (whole disks, logical volumes or files) can be presented using the same virtual SCSI adapter.
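
For example, a whole disk and a logical volume can both be exported through the same server adapter (a sketch only; hdisk34, testlv_client and the -dev names are illustrative and assume the devices already exist on the VIOS):

mkvdev -vdev hdisk34 -vadapter vhost0 -dev vt_disk1        <–whole physical disk as the first backing device
mkvdev -vdev testlv_client -vadapter vhost0 -dev vt_lv1    <–a logical volume as a second backing device on the same vhost
lsmap -vadapter vhost0                                     <–both virtual target devices appear under vhost0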

When using Live Partition Mobility, the storage also needs to be assigned to the Virtual I/O Servers on the target server.

—————————-

File Backed Virtual SCSI Devices

Virtual I/O Server (VIOS) version 1.5 introduced file-backed virtual SCSI devices. These virtual SCSI devices serve as disks or optical media devices for clients.

In the case of file-backed virtual disks, the client is presented with a file from the VIOS that it accesses as a SCSI disk. With file-backed virtual optical devices, you can store, install and back up media on the VIOS and make it available to clients.
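
A minimal sketch of setting up a file-backed virtual optical device on the VIOS (the repository size, ISO name and vhost0 are illustrative assumptions):

mkrep -sp rootvg -size 4G                                  <–create the virtual media repository in a storage pool
mkvopt -name aix_install -file /home/padmin/aix.iso        <–import an ISO image into the repository
mkvdev -fbo -vadapter vhost0 -dev vtopt0                   <–create a file-backed optical device on vhost0
loadopt -vtd vtopt0 -disk aix_install                      <–load the media; the client sees it as an optical (cd) device
lsrep                                                      <–list the repository and its media
unloadopt -vtd vtopt0                                      <–unload the media when finished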

—————————-

Check VSCSI adapter mapping on client:

root@bb_lpar: / # echo "cvai" | kdb | grep vscsi                             <–cvai is a kdb subcommand
read vscsi_scsi_ptrs OK, ptr = 0xF1000000C01A83C0
vscsi0     0x000007 0x0000000000 0x0                aix-vios1->vhost2        <–shows which vhost is used on which vio server for this client
vscsi1     0x000007 0x0000000000 0x0                aix-vios1->vhost1
vscsi2     0x000007 0x0000000000 0x0                aix-vios2->vhost2
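
The same relationship can be checked from the VIOS side with lsmap, which shows the client partition ID for each server adapter (an illustrative cross-check):

lsmap -vadapter vhost2                                     <–Physloc and Client Partition ID identify which client LPAR is served
lsmap -all | grep -E "vhost|Client"                        <–quick overview of every server adapter and its client partition ID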

—————————-

Managing VSCSI devices (server-client mapping)

1. HMC -> VIO Server -> DLPAR -> Virtual Adapter (create the vscsi server adapter, specify which client partition can use it, then create the same adapter in the profile)
(the profile can be updated with Configuration -> Save Current Configuration)
(in the case of an optical device, select that any client partition can connect)
2. HMC -> VIO Client -> DLPAR -> Virtual Adapter (create the matching client adapter; the slot IDs must correspond to the server adapter, and update the profile as well)
3. cfgdev (VIO server), cfgmgr (client)                        <–it will bring up vhostX on vio server, vscsiX on client
4. create needed disk assignments:
-using physical disks:
mkvdev -vdev hdisk34 -vadapter vhost0 -dev vclient_disk    <–for easier identification useful to give a name with the -dev flag
rmvdev -vdev <backing dev.>                                <–back. dev can be checked with lsmap -all (here vclient_disk)

-using logical volumes:
mkvg -vg testvg_vios hdisk34                               <–creating vg for lv
lsvg                                                       <–listing a vg
reducevg <vg> <disk>                                       <–removes a disk from a vg (the vg is deleted when its last disk is removed)

mklv -lv testlv_client testvg_vios 10G                     <–creating the lv that will be mapped to the client
lsvg -lv <vg>                                              <–lists lvs under a vg
rmlv <lv>                                                  <–removes an lv

mkvdev -vdev testlv_client -vadapter vhost0 -dev <any_name>        <–for easier identification useful to give a name with the -dev flag
(here the backing device is an lv (testlv_client))
rmvdev -vdev <back. dev.>                                  <–removes an assignment to the client

-using logical volumes just with storage pool commands:
(vg=sp, lv=bd)

mksp <vgname> <disk>                                       <–creating a vg (sp)
lssp                                                       <–listing storage pools (vgs)
chsp -add -sp <sp> PhysicalVolume                          <–adding a disk to the sp (vg)
chsp -rm -sp bb_sp hdisk2                                  <–removing hdisk2 from bb_sp (storage pool)

mkbdsp -bd <lv> -sp <vg> 10G                               <–creates an lv with given size in the sp
lssp -bd -sp <vg>                                          <–lists lvs in the given vg (sp)
rmbdsp -bd <lv> -sp <vg>                                   <–removes an lv from the given vg (sp)

mkvdev…, rmvdev… also apply here

-using file backed storage pool
first a normal (LV) storage pool has to be created with mkvg or mksp, after that:
mksp -fb <fb sp name> -sp <vg> -size 20G                   <–creates a file backed storage pool in the given storage pool with the given size
(it will look like an lv, and a filesystem is created on it automatically as well)
lssp                                                       <–it will show as FBPOOL
chsp -add -sp clientData -size 1G                          <–increase the size of the file storage pool (clientData) by 1G

mkbdsp -sp fb_testvg -bd fb_bb -vadapter vhost2 10G        <–creates a file backed device and assigns it to the given vhost
mkbdsp -sp fb_testvg -bd fb_bb1 -vadapter vhost2 -tn balazs 8G <–it will also specify a virt. target device name (-tn)

lssp -bd -sp fb_testvg                                     <–lists the lvs (backing devices) of the given sp
rmbdsp -sp fb_testvg -bd fb_bb1                            <–removes the given lv (bd) from the sp
rmsp <file sp name>                                        <–removes the given file storage pool

removing it:
rmdev -dev vhost1 -recursive
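
Before removing the adapter, any remaining virtual target devices should be removed first (a sketch; vclient_disk is the example -dev name used earlier):

rmvdev -vtd vclient_disk                                   <–remove the virtual target device by its name as shown by lsmap -all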
—————————-

On client partitions, MPIO for virtual SCSI devices currently supports only failover mode (which means only one path is active at a time):
root@bb_lpar: / # lsattr -El hdisk0
PCM             PCM/friend/vscsi                 Path Control Module        False
algorithm       fail_over                        Algorithm                  True

Dual VIO VSCSI mapping (multi-pathing):

0. on VIO server: chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
for checking: lsdev -dev fscsi0 -attr (reboot needed)

1. create VSCSI adapters on vios1 and vios2 and on client
(after that cfgdev and cfgmgr)

2. change reserve policy for disks on vio server to no_reserve:
lsdev -dev hdisk32 -attr
chdev -dev hdisk32 -attr reserve_policy=no_reserve

3. create mapping on vio servers (vios1 and vios2):
mkvdev -vdev hdisk34 -vadapter vhost0 -dev BUD_vg

4. after cfgmgr on the client, lspath will show both paths:
/root # lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1

5. on client set the attributes:

  ON DISK:
-health check interval and mode (so the path status is updated automatically):
hcheck_mode=nonactive    <–health check commands are sent down the paths that have no active I/O
hcheck_interval=60       <–how often the health check is performed (60 seconds)
(if it is 0, health checking is disabled and a failed path will not come back automatically after it becomes available again)

chdev -l hdisk0 -a hcheck_interval=60 -a hcheck_mode=nonactive -P

-queue_depth: (determines how many requests the disk can queue to the virtual SCSI client driver)
this value on the client should match the value used for the physical disk on the VIO Server

chdev -l hdisk0 -a queue_depth=20 -P
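
To compare the two sides, the attribute can be checked on both the VIOS physical disk and the client virtual disk (a sketch; hdisk34 and hdisk0 are the example devices used above):

lsdev -dev hdisk34 -attr queue_depth       <–on the VIO server: queue depth of the physical backing disk
lsattr -El hdisk0 -a queue_depth           <–on the client: queue depth of the virtual SCSI disk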

ON ADAPTER:
-path timeout feature on vscsi adapters (it allows the client to check the health of the VIO Server adapter):
chdev -l vscsi0 -a vscsi_path_to=30 -P
(30 seconds is the minimum)

-set error recovery to fast_fail
chdev -l vscsi0 -a vscsi_err_recov=fast_fail -P
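
The adapter settings can be verified on the client afterwards (an illustrative check):

lsattr -El vscsi0 | grep -E "vscsi_path_to|vscsi_err_recov"    <–lists the vscsi adapter attributes set above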

———–

SET PATH PRIORITIES:
By default all paths are defined with priority 1, meaning that traffic will go through the first path.
(Priority 1 is the highest priority, and you can define a priority from 1 to 255)

If you have dual paths, the first path usually uses vscsi0.
If you want to control which path is used, the path priority has to be updated.
Here the priority of the vscsi0 path is changed to 2 so that it becomes the lower-priority path.
The priority of the vscsi1 path remains at 1 so that it is the primary path.

# chpath -l hdisk0 -p vscsi0 -a priority=2
path Changed

 # lspath -AHE -l hdisk0 -p vscsi0
attribute value description user_settable
priority  2     Priority    True

 # lspath -AHE -l hdisk0 -p vscsi1
attribute value description user_settable
priority  1     Priority    True

———–

6. testing:
if vios1 or vios2 is rebooted (or its backing device is removed with rmvdev), the corresponding path on the client will go down
(here the VIOS serving the higher-priority vscsi1 path was taken down):
# lspath
Enabled hdisk0 vscsi0
Failed  hdisk0 vscsi1

in errpt, there will be an entry:
DE3B8540   0201121211 P H hdisk0         PATH HAS FAILED
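
Once the VIOS is back (or the backing device is recreated), the path should return to Enabled automatically as long as hcheck_interval is not 0; it can also be re-enabled manually (a sketch):

lspath -l hdisk0                               <–verify that both paths show Enabled again
chpath -l hdisk0 -p vscsi1 -s enable           <–manually re-enable the path if health checking is disabled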

Source: https://aixexpert.wordpress.com/aix-virtualization-2/in-virtualisation/
