Gluex VO Support

Richard Jones, University of Connecticut
last updated May 8, 2013

To use Gluex OSG resources, you must complete the following steps.

  1. Administrative procedure:
    1. Understand and agree to the acceptable use policy for this VO.
    2. Obtain a personal grid certificate from the OSG Identity Manager and install it in your unix home directory and in your web browser. Instructions for doing this may be found here; a sketch of the usual file layout under unix is given after this list.
    3. Become a member of the Gluex VO by filling in this form. When you do this you should already have your personal grid certificate loaded in your browser to verify your identity.
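
    For reference, one common way to install the certificate under unix is sketched below. The export file name grid_cert.p12 is only a placeholder for whatever your browser produced when you backed up the certificate.

        # extract the certificate and the private key from the browser export
        mkdir -p ~/.globus
        openssl pkcs12 -in grid_cert.p12 -clcerts -nokeys -out ~/.globus/usercert.pem
        openssl pkcs12 -in grid_cert.p12 -nocerts -out ~/.globus/userkey.pem

        # the grid tools refuse to use a private key with permissive file modes
        chmod 644 ~/.globus/usercert.pem
        chmod 600 ~/.globus/userkey.pem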

  2. Installation procedure:
    1. Install the OSG client package, following the instructions provided here. You may do this either as an individual user (your account only) or as root (supports all users on your system).
    2. Verify that your grid certificate is registered with the Gluex VO by typing the command "voms-proxy-init" in the account where you stored your grid certificate.
    3. Verify that you are authorized to submit jobs and store files to the Gluex grid by typing the command "globusrun -r ce1.phys.uconn.edu -a". If all of your permissions are in order, this command should succeed. A sample session covering this and the previous step is shown after this list.
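
    As a quick check, a terminal session for steps 2 and 3 might look something like the following. The explicit VO name passed to voms-proxy-init is not required if Gluex is already configured as your default VO.

        # create a short-lived proxy certificate carrying your Gluex VO membership
        voms-proxy-init -voms Gluex

        # inspect the proxy and its remaining lifetime
        voms-proxy-info -all

        # test authentication and authorization at the Gluex compute element
        globusrun -r ce1.phys.uconn.edu -a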

  3. Operating procedure:
    1. Begin each grid session by renewing your proxy user certificate with the command "voms-proxy-init". You can see the lifetime remaining on your proxy at any time with the command "voms-proxy-info". This proxy must be valid any time you want to execute grid commands, but letting your proxy expire does not affect any jobs or files that are in the system already.
    2. Prepare a submit file for your job, in which you specify the executable, the hardware/OS constraints, and any required input/output files that the batch system must deal with. Submit the job to the grid using the globus-job-submit-ws command. A tutorial that goes over all of these steps for simple test cases can be found here. The name of the CE host for globus jobs should be ce1.phys.uconn.edu. A sketch of a submit file is given after this list.
    3. Monitor the progress of your job using the command globus-job-status (or condor_q if you used condor_submit to submit the job to the grid universe). If you decide to cancel the job, use the command globus-job-cancel to stop it.
    4. Once the job is completed, fetch the stdout and stderr logs using the command globus-job-get-output-ws, then clean up after yourself with the command globus-job-clean-ws. Any other output data files produced by your job will have been delivered already during job completion, as specified in your submit script.
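
    For the Condor-G route mentioned in step 3, a minimal grid-universe submit file might look something like the sketch below. The file names, the jobmanager name, and the gt2 GRAM flavor are illustrative assumptions; adjust them to match your job and the CE configuration.

        # myjob.sub -- minimal Condor-G submit description (illustrative)
        universe                = grid
        grid_resource           = gt2 ce1.phys.uconn.edu/jobmanager-condor
        executable              = myjob.sh
        transfer_input_files    = input.dat
        should_transfer_files   = YES
        when_to_transfer_output = ON_EXIT
        output                  = myjob.out
        error                   = myjob.err
        log                     = myjob.log
        queue

    The job is then submitted with "condor_submit myjob.sub", watched with "condor_q", and removed if necessary with "condor_rm <jobid>". The globus-job-* commands named in steps 2-4 accomplish the same tasks outside of Condor-G.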

  4. Client firewall considerations:
    1. In order for the Condor-G client job submission and monitoring package to communicate remotely with the Compute Elements in the OSG infrastructure, certain requirements must be met by the local firewall on the client end. A detailed description of the whys and hows can be found here.
    2. In the simplest case, the client machine should have a public internet IP address, i.e. it should not be behind a NAT firewall. If the client machine is not reachable directly by name from the internet, then the environment variable GLOBUS_HOSTNAME must be set in the user's environment on the client to the hostname (or IP address, if the hostname is not registered in the DNS) of the internet gateway that is set up to route traffic on behalf of the client.
    3. The firewall on the client host must be configured to permit outgoing connections to servers at the OSG sites of interest, and to accept incoming connections from those servers on ports within some predefined range above port 1023. This range is set by the environment variable GLOBUS_TCP_PORT_RANGE for Globus Toolkit components, and by the HIGHPORT and LOWPORT settings in the condor_config file for the Condor-G job submission client tools. Example settings are sketched below.
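
    As a concrete illustration of item 3, the settings below open a matching port range for both the Globus Toolkit tools and Condor-G. The particular range 40000-41000 and the gateway hostname are placeholders, not recommended values.

        # in the user's shell environment on the client (bash syntax assumed)
        export GLOBUS_TCP_PORT_RANGE=40000,41000
        # only needed if the client sits behind a NAT gateway (see item 2)
        export GLOBUS_HOSTNAME=gateway.example.edu

        # in condor_config (or condor_config.local) on the client,
        # covering the same range that the firewall leaves open
        LOWPORT = 40000
        HIGHPORT = 41000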