Registration

If you are eligible to use the LASC, you must first register to obtain an account. The Registration Form is available here. After registration, you will be provided with a personal username and password.


System Access

To access the cluster, use the LASC IP address: 5.179.5.87. Note that the cluster is accessible only via the SSH (Secure Shell) protocol, version 2. In a UNIX/Linux environment, you can connect to the cluster using ssh:

    ssh  username@5.179.5.87       or       ssh   -l   username   5.179.5.87
File transfer between your computer and the cluster can be done using sftp.
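
For example, a typical sftp session to copy files to and from the cluster might look like this (input.dat and results.tar.gz are hypothetical file names used only for illustration):

    sftp  username@5.179.5.87
    sftp> put input.dat              (upload a local file to the cluster)
    sftp> get results.tar.gz         (download a file from the cluster)
    sftp> quit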

An SSH Secure Shell client for Microsoft Windows operating systems can be found here. It is free for non-commercial use and contains both a Secure Shell Client (a secure analog of Telnet) and a Secure File Transfer Client (a secure analog of FTP).


Software Environment

After logging in, you will be at a Red Hat Linux shell prompt. The shell prompt looks similar to an MS-DOS command prompt. Users type commands at the shell prompt, the shell interprets these commands, and then the shell tells the operating system what to do. Experienced users can write shell scripts to expand their capabilities even further.

Help on the use of a command can be obtained by reading its man page; just type

    man command_name
at a shell prompt.

The default shell for Red Hat Linux is the Bourne Again Shell, or bash. You can learn more about bash by reading the bash man page (type man bash at a shell prompt).

Several often-used commands are described below:
    To log in to another cluster node, use the rsh command.
    To see the cluster status, use the clrun -a command.
    To change directories, use the cd command.
    Using the ls command, you can display the contents of your current directory.
    You can compress/uncompress files with the compression tools gzip/gunzip, bzip2/bunzip2, or zip/unzip.
    The tar command allows you to collect several files and/or directories into one file. This is a good way to create backups and archives (see the example after this list).
    To copy files, use the cp command.
    To move files, use the mv command.
    You can create directories with the mkdir command.
    To delete files or directories, use the rm command.
    To close the shell prompt (log out of the system), use the exit command.
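
As an illustration of the tar command mentioned above, the following sketch creates, inspects, and unpacks a compressed archive (the directory name myproject is only an example):

    tar -czf myproject.tar.gz myproject/    # pack the directory into a gzip-compressed archive
    tar -tzf myproject.tar.gz               # list the archive contents without extracting
    tar -xzf myproject.tar.gz               # unpack the archive in the current directory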


Setting up SSH Environment for MPI use

Before running MPI programs, you must first set up your SSH environment so that you can connect to any cluster node without a password. This can be done with the following steps:

    1) ssh-keygen -t dsa
    2) cd .ssh
    3) cp id_dsa.pub authorized_keys
    4) cp id_dsa.pub authorized_keys2
    5) ssh mpich* and answer "yes" (here * means a node number from 1 to the last one)
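
The same procedure can also be written as a short shell sketch. The empty passphrase and the default key location are assumptions; the node count (89) corresponds to the node list given below:

    cd $HOME
    ssh-keygen -t dsa                  # press Enter to accept the default file and an empty passphrase
    cd ~/.ssh
    cp id_dsa.pub authorized_keys
    cp id_dsa.pub authorized_keys2
    # collect the host keys of all nodes, answering "yes" once per node
    for i in $(seq 1 89); do ssh mpich$i hostname; done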


Managing Your Allocation

To run your application in interactive mode, simply type at a shell prompt

    application_name

To start an application in background mode, type
    application_name &

If you need your application to continue running in background mode after you log out of the system, type
    nohup full_application_path/application_name &
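
By default, the output of a program started with nohup is appended to the file nohup.out in the current directory. If you prefer a named log file, you can redirect the output explicitly, for example (application_name is a placeholder, as above):

    nohup full_application_path/application_name > application.log 2>&1 &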

To measure the run time of your application, use the time command (see man time for details).
For example, use the following command to run time-consuming applications from your home directory:
    nohup time -p -o time.lst $HOME/application_name &
The file time.lst will then contain the following information (in seconds):
1) the elapsed real time;
2) the total number of CPU-seconds that the process spent in user mode;
3) the total number of CPU-seconds that the process spent in kernel mode.


System Resources Available to Users

When you are logged in to the cluster (see System Access above), you can access all cluster nodes via the Gigabit Ethernet network (192.168.2.*) using the RSH and SSH protocols.

The names and IP addresses of the nodes are (all in lowercase!):
lasc1 192.168.2.1
... ...
lasc89 192.168.2.89
gateway 192.168.2.254

Here gateway is the firewall used to connect the cluster to the Internet. The gateway is "transparent" to users, which means that you cannot log on to the gateway. The node lasc1 is the one you are logged in to first. To connect to the other nodes, you must use the rsh (or ssh) command. For example, use the following command to connect to the node lasc2:

    rsh  lasc2

N.B. Always use the above lasc* names to log into the nodes: they are automatically recognized.

After login, you will have access to your /home directory. The /home directory is located on the RAID-5 disk subsystem at dell and has a capacity of about 6.3 TB. The /home directory is exported to the other nodes via 6-link aggregated Gigabit Ethernet channels using NFS.

Besides the /home directory, several other directories are also accessible to all users on all nodes.

There are /scratch/work* directories for use by MPI:

    /scratch/work1
    ...
    /scratch/work89

The last number indicates the node where the directory physically resides, i.e. the directory /scratch/work1 is located on the node lasc1. The capacity of the /scratch/work1 directory is 120 GB; of /scratch/work12 to /scratch/work29, 250 GB; of /scratch/work30 to /scratch/work43, 146 GB; of /scratch/work50 to /scratch/work64, 240 GB; and of /scratch/work70 to /scratch/work89, 300 GB.

In addition, the /scratch/work* directory on each node is symlinked to /scratch/work, i.e., for example, on the node lasc2 the /scratch/work directory is equivalent to the /scratch/work2 directory.
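
Before starting a large job, you can check how much free space is left on the local scratch area of the node you are working on, for example:

    df -h /scratch/work        # show size, used, and available space of the local scratch directory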

Additionally, the /public directory, accessible to all users, is available on each node. The /public directory is intended for data exchange between users within a node. Please do not use it for MPI or similar applications, since this directory is located on the system hard disk.
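
For example, to share a file with other users on the same node via /public, you might copy it there and make it world-readable (results.dat is a hypothetical file name):

    cp results.dat /public/
    chmod a+r /public/results.dat      # allow all users on the node to read the file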


At present, the total hard disk space available to all users is about 20 TB. There are no disk quotas, so please respect other users and remove unused data from the cluster hard disks as soon as possible. If for some reason you need more disk space for your personal use and you are ready to invest some money, please contact the System Administrator to discuss the details: there are several possibilities, ranging from additional SATA disks to a Network Attached Storage (NAS) system.


OpenMPI and MPICH are available on all nodes. MPI communication uses a dedicated Gigabit Ethernet network (192.168.1.*) with the following node names and IP addresses (all in lowercase!):
mpich1 192.168.1.1
... ...
mpich89 192.168.1.89
Please use the above names or IP addresses in your MPI "machines" files.
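
As a sketch, a "machines" (host) file simply lists one node name per line, and a typical run command is shown below. The exact mpirun options depend on the MPI implementation you use, and the application name my_mpi_app is only a placeholder; remember to keep all nodes in one file within a single operating-system group (see the note below):

    # contents of a hypothetical host file, e.g. $HOME/machines.txt
    mpich50
    mpich51
    mpich52
    mpich53

    # run the application on 4 processes using this host file
    mpirun -np 4 -machinefile $HOME/machines.txt ./my_mpi_app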


N.B. The node lasc1 uses a 32-bit Red Hat 9.0 operating system, whereas the nodes lasc12-lasc47 use a 64-bit CentOS (Red Hat EL) 4.4 operating system and the nodes lasc50-lasc89 use a 64-bit CentOS (Red Hat EL) 6.6 operating system. Therefore, do not mix nodes from these three groups when running MPI programs!


These pages are maintained by Alexei Kuzmin (a.kuzmin@cfi.lu.lv). Comments and suggestions are welcome.