FAQs

The information under HPC Basics, combined with the cluster-specific information, should help you get started on Vikram-100. This FAQ answers a number of generic questions related to the HPC.

This FAQ is split into sections covering storage, software, job status, and the Garuda Grid.

How do I get an HPC account?
  • Fill out this form and submit it to the Computer Centre.
How do I acknowledge the Vikram-100 HPC?
  • Vikram-100 is dedicated to the research community at PRL. Its continued support depends on its demonstrable value, a significant part of which is the published work that makes use of our systems.

  • Publications resulting from work done on Vikram-100 should include a credit similar to:
    "The computations were performed on the HPC resources at the Physical Research Laboratory (PRL)."
  • PRL requests that a copy of any publication (preprint or reprint) resulting from research done on the PRL Vikram-100 HPC system be uploaded under Recent Publications through your login. Kindly do not forget to select "Yes" under the caption "Acknowledge HPC (Vikram-100)".
How do I use Intel VTune Amplifier on Vikram-100 HPC?
  • Kindly add the module - "module add module add intel/VTune_Ampl_XE_2015" and launch the Intel VTune Amplifier using command - amplxe-gui or amplxe-cl
How do I use Intel Advisor on Vikram-100 HPC?
  • Kindly add the module - "module add intel/Adviser_XE_2015" and launch the Intel Advisor using command - advixe-gui
How do I use Intel Debugger on Vikram-100 HPC?
  • Kindly add the module - "module add intel/Composer-XE-2013" and run the command - idb (for GUI) and idbc (for command line) access.
I requested 24 cores but is my job REALLY using all 24 cores?
  • Good question! Just because you requested 24 cores and the scheduler allocated 24 cores to your job does NOT mean that your program is using all 24 cores. The only way to know how your job is performing is to ssh to the node and run htop or top (see the sketch below). Ssh-ing to a compute node is otherwise strictly prohibited because you can skew the load on the node; checking on your own job is the ONLY exception, and it is to be done for no more than 5 minutes at a time.
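    A minimal sketch of such a check, assuming a PBS-style scheduler (the job id and node name below are placeholders):

      # find out which node(s) your job was allocated
      qstat -n 12345
      # briefly log in to one of those nodes and watch the CPU usage of your own processes
      ssh node25
      top -u $USER        # or: htop -u $USER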

Can I ssh to a compute node?

  • No, unless you are checking on the status of your job (see above), and only for a few minutes (5 minutes at most). It is common for new users who do not yet understand how to run jobs to simply ssh to a compute node and run their program there. This is strictly prohibited because it circumvents the whole purpose of having a job scheduler.
How do I know what cores are available to run with RIGHT NOW?
  • Run vikram-100-stat at the command prompt.
Can I access the cluster from outside PRL?
  • Access to the Vikram-100 HPC is currently restricted to the PRL LAN. However, you can opt for a VPN account, establish a secure tunnel to PRL, and then connect to Vikram-100 through one of your machines running in PRL. To know more, get in touch with the HPC admins ([email protected]).
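    Once the VPN tunnel is up, one possible way to hop through a machine inside PRL is an SSH jump host (the hostnames and username below are placeholders; older ssh clients without the -J option can use ProxyCommand instead):

      # connect to Vikram-100 via your PRL desktop acting as a jump host
      ssh -J username@my-prl-desktop username@vikram-100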

How do I solve the 'Buffers must not be aliased' error?
  • In older versions of Intel MPI (version 4.0), buffer aliasing was supported. However, the MPI 2.2 standard prohibits buffer aliasing, so any such code running on newer versions of Intel MPI (version 5 and above) will throw the 'Buffers must not be aliased' error. Kindly modify your code to conform to the MPI standard, for example by passing MPI_IN_PLACE instead of reusing the receive buffer as the send buffer in collectives such as MPI_Allreduce; this gives maximum compatibility across MPI implementations and makes your code future-proof. For the time being, you may set 'export I_MPI_COMPATIBILITY=4' in your job script (see the sketch below). Reference: https://software.intel.com/en-us/forums/topic/392347
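    A minimal sketch of a job script carrying the temporary workaround, assuming a PBS-style scheduler (the job name, resource request, and executable below are placeholders):

      #!/bin/bash
      #PBS -N my_mpi_job
      #PBS -l nodes=2:ppn=24
      cd $PBS_O_WORKDIR
      # temporary workaround: restore the Intel MPI 4.x buffer-aliasing behaviour
      export I_MPI_COMPATIBILITY=4
      mpirun -np 48 ./my_mpi_app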



Contribution:

HPC users are strongly encouraged to contribute tutorials, articles, how-to's, feedback, FAQs and suggestions related to Vikram-100 to this website. To do so, kindly send an email to [email protected]. Your articles (if you prefer) will appear with your name and date of contribution.

Thank You:

Thank you for taking the time to read this website. We hope that it will help you get started with running jobs on the Vikram-100 HPC cluster. Please note that both Vikram-100 and this website are relatively new, so there may be some hiccups; if you face one, kindly let us know. If you are still having issues, we request that you first ask a colleague who is familiar with HPC for help, and then contact the HPC admin staff at: