FAQs
The information under HPC Basics, combined with the cluster-specific information, should help you get started on Vikram-100. This FAQ answers a number of generic questions related to the HPC, and is split into sections on storage, software, job status, and the Garuda Grid.
How to use grid/parallel Mathematica on the Vikram-100 cluster?
- Click here to know the procedure.
- Fill out this form and submit it to the Computer Centre.
How to acknowledge the use of Vikram-100 in publications?
Vikram-100 is dedicated to the research community in PRL. Its continued support depends on its demonstrable value, part of which includes published work arising from the use of our systems.
- Publications resulting from work done on Vikram-100 should include a credit similar to:
"The computations were performed on the HPC resources at the Physical Research Laboratory (PRL)."
- PRL requests that a copy of any publication (preprint or reprint) resulting from research done on the PRL Vikram-100 HPC system be uploaded under Recent Publications through your login. Kindly do not forget to select "Yes" under the caption - Acknowledge HPC (Vikram-100).
- Kindly add the module - "module add module add intel/VTune_Ampl_XE_2015" and launch the Intel VTune Amplifier using command - amplxe-gui or amplxe-cl
- Kindly add the module - "module add intel/Adviser_XE_2015" and launch the Intel Advisor using command - advixe-gui
- Kindly add the module - "module add intel/Composer-XE-2013" and run the command - idb (for GUI) and idbc (for command line) access.
How do I know whether my job is actually using all the cores it requested?
Good question! Just because you requested 24 cores and the scheduler allocated 24 cores to your job does NOT mean that your program is using all 24 cores. The only way to know how your job is performing is to ssh to the node and run htop or top. SSH-ing to a node is otherwise strictly prohibited, because you can affect the load on the node; this is the ONLY exception, and it is to be done for no more than 5 minutes at a time.
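A minimal sketch of such a check, assuming your job is running on a node named node23 (use the actual node name that the scheduler reports for your job):
ssh node23
top -u $USER     (or: htop)
exit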
Can I ssh to a compute node?
- No, unless you are checking on the status of your job (see above), and only for a few minutes (5 minutes maximum). It is common practice for new users (who don't understand how to run jobs) to simply ssh to a compute node and run their program there. This is strictly prohibited, because it circumvents the whole purpose of having a job scheduler.
How to check the status of the cluster?
- Run vikram-100-stat at the command prompt.
Can I access Vikram-100 from outside PRL?
Access to the Vikram-100 HPC is currently restricted to the PRL LAN only. However, you can opt for a VPN account, establish a secure tunnel to PRL, and then connect to Vikram-100 through one of your machines running in PRL. To know more, get in touch with the HPC admins ([email protected]).
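Once you are on the PRL LAN (directly or via VPN), the connection is a standard SSH login. The hostname below is only a placeholder; use the actual login-node address provided by the HPC admins:
ssh your_username@<vikram-100-login-node>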
Why does my MPI code fail with the error 'Buffers must not be aliased'?
Older versions of Intel MPI (version 4.0) tolerated buffer aliasing. However, as per the MPI 2.2 standard, buffer aliasing is prohibited, and as a result any code that aliases buffers will throw the 'Buffers must not be aliased' error on newer versions of Intel MPI (version 5 and above). Kindly modify your code to conform to the MPI standard, both for maximum compatibility across MPI implementations and to make your code future-proof. For the time being, you may set 'export I_MPI_COMPATIBILITY=4' in your job script. Reference: https://software.intel.com/en-us/forums/topic/392347
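The usual fix is to pass MPI_IN_PLACE instead of aliasing the send and receive buffers. A minimal sketch in C (names and values are our own, for illustration only):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double data[4] = {1.0, 2.0, 3.0, 4.0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Prohibited by MPI 2.2 -- send and receive buffers alias each other:
     * MPI_Reduce(data, data, 4, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
     */

    /* Standard-conforming: the root passes MPI_IN_PLACE as the send buffer. */
    if (rank == 0)
        MPI_Reduce(MPI_IN_PLACE, data, 4, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    else
        MPI_Reduce(data, NULL, 4, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Reduced first element across ranks: %g\n", data[0]);

    MPI_Finalize();
    return 0;
}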