Wednesday, May 20, 2009
op - Open Ports on a System
This script lists the open ports on a system and the process behind each one, as well as some information about the port and its owning process.
##############################################################################
##############################################################################
#!/usr/bin/sh
#
# This utility is used to list all of the open ports on a Solaris UNIX
# based machine. It relies heavily on tools that are used to read through
# the /proc directory so it needs to be run as root or as a root equivalent.
#
# The pathname for all executables should reside in root's path statement
# for ldd to work correctly with this script
#
# Originally posted at: http://www.ilkda.com/op.htm
# Submitter: Alan Pae
#
# nmap is a great tool that will show you what is listening on each port
# of a given machine.
#
# nmap tries to give you some information about each port that it finds.
# It does not assume that the standard service is behind a given port
# number, and it will identify what is actually listening on a port if it
# is asked to do so and is able to do so.
#
# However, I felt that the information that was given to me via nmap still
# left a lot of research to do on each port so I threw a script together
# to tell you what process is listening on each port and to give you some
# more information about the process so you can tell at a glance what is
# happening on the system that you are responsible for.
#
# Enjoy the script.
#
#
# Sample output:
#
# -----------------------------------------------------
# Process ID #: 154
#
# Ports used:
# sockname: AF_INET 0.0.0.0 port: 68
# sockname: AF_INET6 :: port: 546
# sockname: AF_INET 127.0.0.1 port: 4999
# sockname: AF_INET 127.0.0.1 port: 4999
# sockname: AF_INET 10.0.0.64 port: 68
#
# COMMAND
# /sbin/dhcpagent
#
# Command Line #2: /sbin/dhcpagent
#
# Environment Variables: 154: /sbin/dhcpagent
# envp[0]: LANG=C
# envp[1]: LD_LIBRARY_PATH=/lib
# envp[2]: PATH=/usr/sbin:/usr/bin
# envp[3]: SMF_FMRI=svc:/network/physical:default
# envp[4]: SMF_METHOD=start
# envp[5]: SMF_RESTARTER=svc:/system/svc/restarter:default
# envp[6]: SMF_ZONENAME=global
# envp[7]: SUNW_NO_MPATHD=
# envp[8]: TZ=US/Pacific
# envp[9]: _INIT_NET_STRATEGY=none
#
# Libraries used: /sbin/dhcpagent
# libxnet.so.1 => /lib/libxnet.so.1
# libnvpair.so.1 => /lib/libnvpair.so.1
# libdhcpagent.so.1 => /lib/libdhcpagent.so.1
# libdhcputil.so.1 => /lib/libdhcputil.so.1
# libinetutil.so.1 => /lib/libinetutil.so.1
# libdevinfo.so.1 => /lib/libdevinfo.so.1
# libdlpi.so.1 => /lib/libdlpi.so.1
# libc.so.1 => /lib/libc.so.1
# libnsl.so.1 => /lib/libnsl.so.1
# libsocket.so.1 => /lib/libsocket.so.1
# libuuid.so.1 => /lib/libuuid.so.1
# libgen.so.1 => /lib/libgen.so.1
# libsec.so.1 => /lib/libsec.so.1
# libdladm.so.1 => /lib/libdladm.so.1
# libmp.so.2 => /lib/libmp.so.2
# libmd.so.1 => /lib/libmd.so.1
# libscf.so.1 => /lib/libscf.so.1
# libavl.so.1 => /lib/libavl.so.1
# librcm.so.1 => /lib/librcm.so.1
# libkstat.so.1 => /lib/libkstat.so.1
# libuutil.so.1 => /lib/libuutil.so.1
# libm.so.2 => /lib/libm.so.2
#
# Maximum number of file descriptors = unlimited file descriptors
#
# Effective Real Effective Real
# User User Group Group
#
# root root root root
#
# Current Working Directory: /
#
# elfsign: verification of /sbin/dhcpagent passed.
#
# -----------------------------------------------------
#
#
# Walk every process in /proc and report on those that hold open ports.
for i in `ls /proc`
do
  # pfiles lists the open files and sockets of the process; keep only the socket port lines.
  openport=`pfiles $i 2> /dev/null | grep "port:"`
  if [ ! -z "$openport" ]; then
    echo "Process ID #: $i"
    echo ""
    echo "Ports used: \n $openport"
    echo ""
    # Full command line as reported by the BSD-style ps.
    commandline=`/usr/ucb/ps awwx $i | awk '{print $5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25}'`
    echo "$commandline"
    echo ""
    commandline2=`pargs -l $i`
    echo "Command Line #2: $commandline2"
    echo ""
    # Environment of the process.
    eco=`pargs -e $i`
    echo "Environment Variables: $eco"
    echo ""
    # Shared libraries mapped into the process.
    deps=`pldd $i`
    echo "Libraries used: $deps"
    echo ""
    # File descriptor limit, taken from the rlimit line of pfiles output.
    filedescriptors=`pfiles $i 2> /dev/null | grep rlimit | awk '{print $3,$4,$5}'`
    echo "Maximum number of file descriptors = $filedescriptors"
    echo ""
    # Effective and real user and group of the process.
    eu=`ps -o user -p $i`
    ru=`ps -o ruser -p $i`
    eg=`ps -o group -p $i`
    rg=`ps -o rgroup -p $i`
    effectiveuser=`echo $eu | awk '{print $2}'`
    realuser=`echo $ru | awk '{print $2}'`
    effectivegroup=`echo $eg | awk '{print $2}'`
    realgroup=`echo $rg | awk '{print $2}'`
    echo "Effective Real Effective Real"
    echo "User User Group Group"
    echo ""
    echo "$effectiveuser \t\t $realuser \t $effectivegroup \t\t $realgroup"
    echo ""
    current=`pwdx $i | awk '{print $2}'`
    echo "Current Working Directory: $current"
    echo ""
    # Verify the elfsign signature of the process's executable.
    elves=`pldd $i | awk '{print $2}'`
    elfsign verify -e $elves
    echo ""
    echo "-----------------------------------------------------"
  fi
done
##############################################################################
### This script is submitted to BigAdmin by a user of the BigAdmin community.
### Sun Microsystems, Inc. is not responsible for the
### contents or the code enclosed.
###
###
### Copyright Sun Microsystems, Inc. ALL RIGHTS RESERVED
### Use of this software is authorized pursuant to the
### terms of the license found at
### http://www.sun.com/bigadmin/common/berkeley_license.jsp
##############################################################################
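To use the script, save it (the filename op.sh below is only an assumed name), make it executable, and run it as root. A TCP listener it reports can be cross-checked with netstat:

# chmod +x op.sh
# ./op.sh > /var/tmp/open_ports.txt
# netstat -an | grep LISTEN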
runqueue.d
The finest sampling interval available to sar is one second. If a system
has many processes that take only a couple of milliseconds to run, sar
will not know that they were ever in the run queue; this DTrace script
samples the run queue far more frequently.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#!/usr/sbin/dtrace -s
#
# Used for automating checks of CPU, memory, I/O, and network TCP performance.
#
# This script is described and explained in the BigAdmin Tech Tip:
# Automating a System Performance Check Using the checkperf Utility
# http://www.sun.com/bigadmin/content/submitted/perf_check.jsp
#
#pragma D option quiet
profile-1000hz
/curthread->t_cpu->cpu_disp->disp_nrunnable/
{
        @runq = sum(curthread->t_cpu->cpu_disp->disp_nrunnable);
}

profile:::tick-30sec
{
        normalize(@runq, 3000);
        printa("%@8d", @runq);
        exit(0);
}
##############################################################################
### This script is submitted to BigAdmin by a user of the BigAdmin community.
### Sun Microsystems, Inc. is not responsible for the
### contents or the code enclosed.
###
###
### Copyright Sun Microsystems, Inc. ALL RIGHTS RESERVED
### Use of this software is authorized pursuant to the
### terms of the license found at
### http://www.sun.com/bigadmin/common/berkeley_license.jsp
##############################################################################
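To run the script, make it executable and invoke it directly (it exits on its own after the 30-second sample), or pass it to dtrace -s. The figure it prints can be compared against sar's one-second view over the same window:

# chmod +x runqueue.d
# ./runqueue.d
# sar -q 1 30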
Monday, May 11, 2009
grep /var/adm/messages for serious errors
# name: checkmessages_net
#
#
MAIL_RECEIVER=jbatu@p1mh01.amkor.com
# Extract "Mon DD" from the date output so it matches the timestamp format in /var/adm/messages.
today=`date | cut -c5-10`
PATH=/usr/bin; export PATH
Host=`hostname`
# Check host01.
# Check today's message above the level of warning.
GREP="grep "\($today\)" /var/adm/messages | egrep "emerg|alert|crit|err|warning" | grep -v "forceload of misc/md\" | grep -v "No proxy found""
# The messages to be ignored
FILTER="| grep -v \"forceload of misc/md\""
FILTER="$FILTER | grep -v \"No proxy found\""
GREP="$GREP$FILTER"
if eval "$GREP" > /tmp/seriousmessages.txt
then
mailx -s "$Host Serious Message" $MAIL_RECEIVER < /tmp/seriousmessages.txt
fi
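To run the check automatically, an entry along these lines could be added to root's crontab with crontab -e (the path /usr/local/bin/checkmessages_net is only an assumed install location):

0 * * * * /usr/local/bin/checkmessages_net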
Thursday, May 7, 2009
disk mirror
#!/bin/sh
#
# Mirror the slices of the boot disk onto a second disk using Solaris
# Volume Manager (DiskSuite). State database replicas (metadb) are
# assumed to already exist.
BOOT_DISK=c1t0d0
MIRROR_DISK=c1t1d0
METAINIT=/usr/sbin/metainit
METAROOT=/usr/sbin/metaroot
# Slice 0 (root slice)
$METAINIT -f d10 1 1 ${BOOT_DISK}s0
$METAINIT -f d20 1 1 ${MIRROR_DISK}s0
$METAINIT d30 -m d10
$METAROOT d30
# Slice 1
$METAINIT -f d11 1 1 ${BOOT_DISK}s1
$METAINIT -f d21 1 1 ${MIRROR_DISK}s1
$METAINIT d31 -m d11
# Slice 2 will not be mirrored (represents entire disk)
# Slice 3 is initially commented out; slice containing DiskSuite
# database replicas in our environment
#$METAINIT -f d13 1 1 ${BOOT_DISK}s3
#$METAINIT -f d23 1 1 ${MIRROR_DISK}s3
#$METAINIT d33 -m d13
# Slice 4
$METAINIT -f d14 1 1 ${BOOT_DISK}s4
$METAINIT -f d24 1 1 ${MIRROR_DISK}s4
$METAINIT d34 -m d14
# Slice 5
$METAINIT -f d15 1 1 ${BOOT_DISK}s5
$METAINIT -f d25 1 1 ${MIRROR_DISK}s5
$METAINIT d35 -m d15
# Slice 6
$METAINIT -f d16 1 1 ${BOOT_DISK}s6
$METAINIT -f d26 1 1 ${MIRROR_DISK}s6
$METAINIT d36 -m d16
# Slice 7
#$METAINIT -f d17 1 1 ${BOOT_DISK}s7
#$METAINIT -f d27 1 1 ${MIRROR_DISK}s7
#$METAINIT d37 -m d17
# Make the mirror disk bootable
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/${MIRROR_DISK}s0
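The commands above only create one-way mirrors. A sketch of the usual follow-up under the standard SVM procedure: update /etc/vfstab so the non-root slices mount their d3x metadevices, reboot, then attach the second submirrors and let them resync:

METATTACH=/usr/sbin/metattach
$METATTACH d30 d20
$METATTACH d31 d21
$METATTACH d34 d24
$METATTACH d35 d25
$METATTACH d36 d26
# Watch resync progress:
metastat | grep -i sync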
Wednesday, May 6, 2009
create ISO on solaris
#!/bin/sh
#
# =head1 NAME
#
# mkiso - construct an ISO9660 CD image for burning a data CD
#
# =head1 SYNOPSIS
#
# mkiso [-l label] [-o image.iso] directory
#
# =head1 DESCRIPTION
#
# I<mkiso> is a program
# for constructing an ISO9660 image for a CDROM.
# It is partner to mkcd(1cs), a program to burn data CDs.
#
# =head1 OPTIONS
#
# =over 4
#
# =item B<-l> I<label>
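The posted script is cut off at this point. For reference, a minimal sketch of the same idea using the underlying tools (assuming mkisofs and the Solaris cdrw(1) utility are available; the label, directory, and image path are placeholders):

#!/bin/sh
# Build an ISO9660 image with Rock Ridge and Joliet extensions,
# then burn it to a blank CD.
LABEL=mydata
DIR=/export/home/data
mkisofs -o /var/tmp/image.iso -V "$LABEL" -r -J "$DIR"
cdrw -i /var/tmp/image.iso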
Tuesday, May 5, 2009
Symantec Storage Foundation
# /opt/VRTS/bin/vxsvcctrl status -> view VEA service status
# /opt/VRTS/bin/vxsvcctrl stop -> stop VEA service
# /opt/VRTS/bin/vxsvcctrl start -> start VEA service
# vxdg list
# vxdg list diskgroup
# vxdg free
# vxdiskadd c1t0d0
# vxdg init diskgroup [cds=on|off] diskname=devicename
# vxdg init mktdg mktdg01=c1t0d0s2
# vxdiskadd c1t1d0 -> Adding a disk to a disk group
Removing a disk from a disk group
# vxdg [-g diskgroup] rmdisk diskname
# vxdg -g mydg rmdisk mydg02
Once the disk has been removed from its disk group, you can (optionally)
remove it from VxVM control completely, as follows:
# vxdiskunsetup devicename
For example, to remove c1t0d0s2 from VxVM control, use these commands:
# vxdiskunsetup c1t0d0s2
Deporting a disk group
-> Deporting a disk group disables access to a disk group that is currently enabled (imported) by the system.
Deport a disk group if you intend to move the disks in a disk group to another system. Also, deport a disk
group if you want to use all of the disks remaining in a disk group for a new purpose
# vxvol -g diskgroup stopall
# vxdg deport diskgroup
Importing a disk group
# vxdisk -s list
-> Select menu item 8 (Enable access to (import) a disk group) from the vxdiskadm main menu.
# vxdg import diskgroup
# vxdisk listtag
Renaming a disk group
# vxdg [-t] -n newdg import diskgroup
# vxdg -t -n mytempdg import mydg
To rename a disk group during deport, use the following command:
# vxdg [-h hostname] -n newdg deport diskgroup
# vxdg -h jingo -n myexdg deport mydg
Moving disks between disk groups
# vxdg -g salesdg rmdisk salesdg04
# vxdg -g mktdg adddisk mktdg02=c0t3d0
Handling errors when importing disks
To clear locks on a specific set of devices, use the following command:
# vxdisk clearimport devicename ...
To clear the locks during import, use the following command:
# vxdg -C import diskgroup
If some of the disks in the disk group have failed, you can force the disk group to be
imported by specifying the -f option to the vxdg import command:
# vxdg -f import diskgroup
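Putting the deport and import commands together, a sketch of moving a disk group named mydg from one host to another (the name is a placeholder; run the first two commands on the old host and the last two on the new host once it can see the disks):
# vxvol -g mydg stopall
# vxdg deport mydg
# vxdg import mydg
# vxvol -g mydg startall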
set manual dns thru command line
Sunday, May 3, 2009
Installing or Upgrading the iSCSI Software Initiator
Use this procedure to install or upgrade the Microsoft iSCSI Software Initiator.
Steps
- Stop the SnapDrive for Windows service.
- If you are running SnapManager for Microsoft Exchange, stop any application-specific services that have iSCSI dependencies (for example, Microsoft Exchange System Attendant).
- Perform the following steps to install the new iSCSI Software Initiator components.
If you are installing or upgrading a cluster, first close MMC, then install the new iSCSI Software Initiator components on each node. Start with the nodes that do not own the cluster resources, and rotate which system is the owner until all nodes in the cluster have the new iSCSI initiator installed.
- Download the Microsoft iSCSI Software Initiator from the Microsoft site.
- Run the install package executable and proceed through the Installation wizard.
- If you are going to use the iSCSI Initiator to create and manage LUNs, make sure that the Initiator Service and Software Initiator check boxes are selected on the Installation Options screen. Note: The Virtual Port Driver option is unavailable because it is automatically installed during the Microsoft iSCSI Initiator installation and upgrade.
- If you want to use MPIO, check the Microsoft MPIO Multipathing Support for iSCSI check box on the Installation Options screen.
- Follow the prompts to complete the installation.
For more information about installing and configuring the Microsoft iSCSI Initiator, see the iSCSI Microsoft Windows Initiator Support Kit Setup Guide document on the NOW site.
- Restart the SnapDrive service on the stand-alone host or on each node in the cluster
Establishing an iSCSI session to a target
Before creating a LUN, you need to have an iSCSI session to the target on which you will manage the LUN.
Before You Begin
Verify that the iSCSI service is started.
Steps
- Perform the following actions to launch the Create iSCSI Session wizard:
- In the left MMC pane, select the instance of SnapDrive you want to manage.
- Select iSCSI Management.
- From the menu choices at the top of MMC, navigate to Action > Establish Session.
- In the ISCSI Session wizard, click Next.
The Provide Storage System Identification panel is displayed.
- In the Provide Storage System Identification panel, enter the storage system name or IP address of the storage system management port you want to establish the iSCSI session with, and then click Next. Note: The IP address you enter is the storage system’s management port IP address, not the target portal IP address to which SnapDrive will establish an iSCSI session. You will select the IP address for establishing an iSCSI session in Step 5.
The Provide iSCSI HBA panel is displayed.
- In the upper pane of the Provide iSCSI HBA panel, click the radio button next to an available iSCSI HBA to specify the initiator portal you want to use.
- In the lower pane of the Provide iSCSI HBA panel, perform the following actions:
- Select the target portal to which SnapDrive will establish the iSCSI session by clicking the radio button next to the IP address of the target portal you want to use.
- If your target requires authentication, select Use CHAP, and then type the user name and password that iSCSI will use to authenticate the initiator to the target. For more information about CHAP, see “Understanding CHAP authentication”.
- Click Next.
The Completing the iSCSI Session Wizard panel is displayed.
- In the Completing the iSCSI Session Wizard, perform the following actions:
- Review the information to make sure it is accurate.
- If the information is not accurate, use Back to go back to previous panels of the wizard to modify information.
- Click Finish.
Creating a shared LUN
You can use SnapDrive to create FCP- or iSCSI-accessed LUNs that are shared between clustered Windows servers.
Before You Begin
Considerations
Keep the following consideration in mind when creating a LUN:
Steps
- Perform the following actions to launch the Create Disk wizard:
- Select the SnapDrive instance for which you want to create a disk.
- Select Disks.
- From the menu choices at the top of MMC, navigate to Action > Create Disk .
The Create Disk Wizard is launched.
- In the Create Disk Wizard, click Next.
The Provide Storage System Name, LUN Path and Name panel is displayed.
- In the Provide a Storage System Name, LUN Path and Name panel, perform the following actions:
- In the “Storage System Name” field, type the storage system name where the LUN will be created or select an existing storage system using the pull-down menu.
- In the “LUN Path” field, type the LUN path or select the path on the storage system you added in Step a.
- In the "LUN Name" field, enter a name for the LUN and click Next.
The Select a LUN Type panel is displayed.
- In the Select a LUN Type panel, select Shared, and then click Next.
- In the “Information About the Microsoft Cluster Services System” panel, verify that you want the disk to be shared by the nodes listed, and then click Next.
The Select LUN Properties panel is displayed.
- In the Select LUN Properties panel, either select a drive letter from the list of available drive letters or enter a volume mount point for the LUN you are creating. When you create a volume mount point, enter the drive path that the mounted drive will use: for example, G:\mount_drive1\.
Note: The root of the volume mount point must be owned by the node on which you are creating the new disk.
Note: You can create cascading volume mount points (one mount point mounted on another mount point). However, if a cascading mount point is created on an MSCS shared disk, you might receive a system event warning indicating that the disk dependencies might not be correctly set. The warning is harmless: SnapDrive creates the dependencies, and the mounted disks function as expected.
- While still in the Select LUN Properties panel, complete the following actions:
- Click Limit or Do not limit for the option labeled “Do you want to limit the maximum disk size to accommodate at least one snapshot?”
If you select Limit, the disk size limits displayed are accurate only when they first appear on the Select LUN Properties panel. When this option is selected, the following actions might interfere with the creation of at least one Snapshot copy:
- Select a LUN size. The size must fall within the minimum and maximum values displayed in the panel.
- Click Next.
If the settings on the storage system volume or qtree on which you are creating the LUN do not allow SnapDrive to proceed with the create operation, the Important Properties of the Storage System Volume panel is displayed, as described in Step 8. Otherwise, Step 8 is skipped.
- The Important Properties of the Storage System Volume panel displays the settings that will be used for the volume or qtree you specified in Step 4 of this procedure.
SnapDrive requires the storage system volume containing LUNs to have the following properties:
- create_ucode = on
- convert_ucode = on
- snapshot_schedule = off
- Click Next.
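For reference, a sketch of how these volume settings might be applied directly on the storage system console (this assumes 7-Mode Data ONTAP and ssh access to a controller named filer01; the controller and volume names are placeholders, and this is not part of the SnapDrive wizard itself):

ssh root@filer01 vol options vol1 create_ucode on
ssh root@filer01 vol options vol1 convert_ucode on
ssh root@filer01 snap sched vol1 0 0 0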
- In the Select Initiators panel, perform the following actions:
- Double-click the cluster group name to display the hosts that belong to the cluster.
- Click the name of a host to select it.
The available initiators for that host are displayed in the Initiator List in the lower half of the pane.
- In the Initiator List pane, select an initiator for the LUN you are creating.
If you select an iSCSI initiator, and an iSCSI connection to the storage system on which you are creating the LUN does not exist, SnapDrive launches the Create iSCSI Session wizard, and you are prompted to select a target portal and initiator. Also, if your target requires authentication of hosts that connect to it, you can type that information here. After you click OK, the iSCSI connection from the Windows host to the storage system is established, even if you do not complete the Create Disk wizard.
If you have MPIO installed and you are using FCP, you have the option to select several FCP initiators.
- Repeat Step 10 and Step 11 for all hosts, and then click Next. Note: The Next button remains unavailable until initiators for all hosts of a cluster have been selected.
The Select Initiator Group management panel is displayed.
- In the Select Initiator Group management panel, specify whether you will use automatic or manual igroup management. If you select automatic igroup management, SnapDrive uses existing igroups or, when necessary, creates new igroups for the initiators you specified in Step 10 through Step 12. If you select manual igroup management, you manually choose existing igroups or create new ones as needed.
If you specify automatic igroup management: click Next. You are done with igroup management.
If you specify manual igroup management: click Next, and then perform the following actions:
- In the Select igroups panel, select from the list the igroups to which you want the new LUN to belong. Repeat this action for each node in the cluster. Note: A LUN can be mapped to an initiator only once.
OR
Click Manage igroups and, for each new igroup you want to create, type a name in the igroup Name text box, select initiators, click Create, and then click Finish to return to the Select igroups panel.
- Click Next. Note: The Next button will remain unavailable until the collection of selected igroups contains all the initiators you selected in Step 11.
- In the Specify Microsoft Cluster Services Group panel, perform the following actions.
- From the Group drop-down list, select one of the cluster groups owned by this node to which the newly created LUN will belong.
OR
Select Create a New Cluster Group to create a new cluster group and then put the newly created LUN in that group.
Note: When selecting a cluster group for your LUNs, choose the cluster group your application will use. If you are creating a volume mount point, the cluster group is already selected, because the cluster group owns your root volume physical disk cluster resources. It is recommended that you create new shared LUNs outside of the cluster group.
- Click Next.
The Completing the Create Disk Wizard panel is displayed.
- In the Completing the Create Disk Wizard panel, perform the following actions:
Creating a storage system volume
You can create a volume on the storage system using the SnapDrive Storage System Management snap-in or FilerView on the storage system.
Considerations
These steps describe how to create a volume using the SnapDrive Storage System Management snap-in.
Steps
- In the left MMC pane, navigate to SnapDrive > Storage System Management.
- Click on the storage system where you want to create a volume.
- Provide login credentials to the storage system when prompted.
A storage system FilerView session will be displayed in the main MMC pane.
- Navigate to Volumes > Add.
- Follow the instructions in the FilerView wizard to add either a traditional or flexible volume.
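For reference, the flexible-volume case might look like the following from the storage system command line (a sketch assuming 7-Mode Data ONTAP, an existing aggregate named aggr1, and ssh access to the controller; all names and the size are placeholders):

ssh root@filer01 vol create vol_snapdrive aggr1 100g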