SciVis Wiki

This is here temporarily until I find out where to put it on NERSC's website

Franklin

We do not yet have an answer as to whether GatewayPorts can be enabled in Franklin's sshd_config. Until we do, the following "hard way" is required.
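
For reference, the setting in question is a single line in the login node's sshd_config. Enabling it requires administrative action; it is shown here only for context.

    # /etc/ssh/sshd_config on the login node (admin-only change)
    GatewayPorts yes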

  1. Log in to Franklin
  2. module use -a /usr/common/graphics/Modules/modulefiles
  3. module load ParaView/3.8.1
  4. launch an interactive batch job. In the following WW is the job size in processes, PP is the number of processes per node, and HH is the number of hours to run for.
    qsub -I -V -q regular -l mppwidth=WW -l mppnppn=PP -l walltime=HH:00:00
  5. create a tunnel to the rank 0 compute node. This involves a couple of steps. First you will acquire the rank 0 hostname; in the following XXXXX is that hostname. Second you will select a port to use on your local workstation and a port to use on Franklin; in the following YYYYY is the port number on your workstation, and ZZZZZ is the port number on Franklin. You will use these three pieces of information to create the tunnel with ssh's escape sequence: in the ssh session from your workstation to Franklin, type ~C at the start of a line to open ssh's command prompt, then enter the forwarding rule.
    $ aprun -n 1 /bin/hostname
    XXXXX
    $ ~C
    -L YYYYY:XXXXX:ZZZZZ
    Forwarding port.
  6. start the ParaView server on Franklin. In the following WW is the job size that was requested in your qsub command, and ZZZZZ is the port number you selected when you established the ssh tunnel.
    aprun -n WW pvserver --use-offscreen-rendering --server-port=ZZZZZ
    Listen on port: ZZZZZ
    Waiting for client...
    Client connected.
    Client connection closed.
  7. start the ParaView client on your workstation. The first time you do this, download the NERSC server configuration file NERSC-ParaView-Config and save it to "~/.config/ParaView/servers.pvsc". Create a new connection (File->Connect), and in the connection dialog choose "Tunnel-Connection". Set the port number to YYYYY and click Connect. A complete example session is sketched below.
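
Putting the Franklin steps together, a session might look like the following sketch. All values are arbitrary examples: a 16-process job with 4 processes per node for 1 hour, nid01234 standing in for the rank 0 hostname, and 11111 for both port numbers.

    # on Franklin (all values are hypothetical examples)
    module use -a /usr/common/graphics/Modules/modulefiles
    module load ParaView/3.8.1
    qsub -I -V -q regular -l mppwidth=16 -l mppnppn=4 -l walltime=1:00:00
    aprun -n 1 /bin/hostname    # prints e.g. nid01234
    # in the ssh session from your workstation, type ~C at the start of a line, then:
    #   -L 11111:nid01234:11111
    aprun -n 16 pvserver --use-offscreen-rendering --server-port=11111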

Euclid

Euclid does not use a batch system; runs are made directly on the login node. This lets us fully automate setting up the tunnel, connecting, and starting the server, without the sshd_config GatewayPorts=yes option. The following describes setting up your desktop and using our install on Euclid.

To setup for ParaView on Euclid:

  1. Install ParaView on your desktop.
  2. Close any open instance of ParaView.
  3. Download the NERSC server configuration file NERSC-ParaView-Config and save it to "~/.config/ParaView/servers.pvsc", as sketched below.
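
Installing the configuration file is a plain copy. A minimal sketch, assuming the file was downloaded to ~/Downloads under its link name:

    mkdir -p ~/.config/ParaView
    cp ~/Downloads/NERSC-ParaView-Config ~/.config/ParaView/servers.pvsc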

To run ParaView on Euclid:

  1. Start ParaView on your desktop
  2. Open the connection dialog (File->Connect)
  3. Select NERSC--Euclid and Connect. See Figure 1.
  4. Set the run options. You will enter your user name, select the ports to use in the ssh tunnel, and set the number of processes. See Figure 2.
  5. As the connection is made you will see the starting server message box (see Figure 3) and the connection xterm window (see Figure 4). If you do not use key-based login you will have to enter your password in the xterm. Do not close the xterm window, as doing so will close the ssh tunnel connecting you to Euclid; a sketch of roughly what the xterm runs appears after this list. The xterm window will close on its own when the job is complete. The connection process takes some time, so be patient. When the connection is established the starting server message box will close.
  6. Visualize!! To verify that you are running in parallel, create a sphere, crank up the theta and phi resolution, and apply the process id filter. The result should be similar to Figure 5, which shows a run with 8 processes.
  7. When you are finished, either disconnect (File->Disconnect) or close ParaView. The server side will shut down and the xterm will close.
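
For orientation, the connection xterm from step 5 does roughly the following on your behalf. This is a hedged sketch only: the actual commands come from the NERSC servers.pvsc file, and the user name, port numbers, process count, and mpirun invocation shown here are assumptions.

    # approximate equivalent of the automated startup (hypothetical values)
    ssh -t -L 11111:localhost:11111 user@euclid.nersc.gov \
        mpirun -np 8 pvserver --use-offscreen-rendering --server-port=11111

Because pvserver runs on the same host that terminates the tunnel, the forward can target localhost, which is why no GatewayPorts setting is needed.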

Carver

Example usage on Carver:

  1. After logging in, load the ParaView module
    > ssh user@carver.nersc.gov
    > module load ParaView/V.V.V
  2. Launch an interactive batch job. In the following NN is the job size in nodes, PP is the number of processes per node, and HH is the number of hours to run for. When the job starts you will be logged into the rank 0 compute node. You need its hostname to set up the tunnel; it is obtained with the hostname command. In the following XXXXX is the hostname of the rank 0 compute node.
    > qsub -I -V -q regular -l nodes=NN:ppn=PP -l walltime=HH:00:00
    > hostname
    XXXXX
  3. Step 2 above will leave you logged into the rank 0 compute node. This is where you will run pvserver. Before you start pvserver you need to create a tunnel from your desktop to the rank 0 compute node. This is done in a second terminal. You will select a port to use on your local workstation and a port to use on Carver. In the following XXXXX is the hostname of the rank 0 compute node, YYYYY is the port number on your workstation, and ZZZZZ is the port number on Carver. From the second terminal:
    % ssh -L YYYYY:XXXXX:ZZZZZ user@carver.nersc.gov
  4. Switch back to the first terminal, where you are logged into the rank 0 compute node on Carver, and start the ParaView server. In the following WW is the total job size requested in your qsub command (NN times PP), and ZZZZZ is the port number you selected when you established the ssh tunnel.
    > mpiexec -np WW pvserver --use-offscreen-rendering --server-port=ZZZZZ
    Listen on port: ZZZZZ
    Waiting for client...
    Client connected.
  5. Start the ParaView client on your workstation and connect to Carver. The first time you do this you will need to add a server: select the client/server option, set the port number to YYYYY, and set the startup type to manual. From this point on you can use this server to connect over a tunnel created on YYYYY. A complete example session is sketched below.
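
As with Franklin, a concrete session helps. The following sketch uses arbitrary example values: 2 nodes with 8 processes each for 1 hour, c0123 standing in for the rank 0 hostname, and 11111 for both port numbers.

    # terminal 1: on Carver (all values are hypothetical examples)
    ssh user@carver.nersc.gov
    module load ParaView/V.V.V    # substitute the installed version
    qsub -I -V -q regular -l nodes=2:ppn=8 -l walltime=1:00:00
    hostname                      # prints e.g. c0123

    # terminal 2: on your workstation, create the tunnel
    ssh -L 11111:c0123:11111 user@carver.nersc.gov

    # terminal 1 again: start the server (16 = 2 nodes x 8 processes)
    mpiexec -np 16 pvserver --use-offscreen-rendering --server-port=11111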

Nautilus

Kraken

Notes

  • It is critical to performance that you explicitly use the --use-offscreen-rendering command-line option.
  • In most cases you will get better frame rates if you enable zlib compression in the ParaView client.