Resource Limits

To keep any one user from monopolizing the system, we impose limits on the number of nodes, and therefore cores and GPUs, that each user can use at one time. You can find the resource allocations for each node type on the Systems and Software page.

When your account is first created, you will have a small startup allocation. Once you have completed the Practical HPC course and earned its certificate (which requires a grade of 70% or better on the graded assignments), you can update your resource allocation to the standard allocation via the User Profile page on the Web Portal:

  • Locate the certificate ID number at the bottom of the page of your certificate of course completion.
  • Copy and paste the certificate ID number into the text box in the “User Resource Limits” section.
  • Click “Submit” to complete the update. A successful update will display the message “Certificate verified. Please allow 5 to 10 minutes for your SLURM limits to revert to the standard default limits.”
  • Wait 5 to 10 minutes before refreshing the page.

If you run into trouble with this process, send an email to supercloud@mit.edu. Resource allocations are listed on the User Profile page and on the Systems and Software page.

Requesting an Increase

Limiting each user to a fixed amount of resources prevents any one user from monopolizing the entire system and allows other users to run jobs. In practice, however, most users can achieve significant speedup while using less than their full allocation.

Before requesting more CPUs, we encourage you to benchmark your applications to determine whether adding CPUs results in an overall increase in performance. For example, you might time the rate at which your application makes progress at different CPU counts: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512. This performance data can help you choose the optimal number of CPUs for your applications.
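To make this concrete, here is a minimal, generic benchmarking sketch in Python; it is not a SuperCloud-specific tool, busy_work is a placeholder for your real workload, and the process counts are examples. It times a fixed amount of work at several process counts and reports speedup (time on 1 CPU divided by time on p CPUs) and parallel efficiency (speedup divided by p). Within a single node you are limited by its core count; for multi-node scaling, apply the same idea by timing your actual jobs at each size.

    # Minimal, generic strong-scaling benchmark (hypothetical workload).
    # busy_work is a stand-in for one unit of your real computation.
    import time
    from multiprocessing import Pool

    def busy_work(n):
        # CPU-bound placeholder: sum of squares up to n.
        total = 0
        for i in range(n):
            total += i * i
        return total

    TASKS = 256            # fixed total amount of work (strong scaling)
    WORK_PER_TASK = 200_000

    def run(num_procs):
        # Time the full task set with a pool of num_procs workers.
        start = time.perf_counter()
        with Pool(processes=num_procs) as pool:
            pool.map(busy_work, [WORK_PER_TASK] * TASKS)
        return time.perf_counter() - start

    if __name__ == "__main__":
        baseline = run(1)
        print(f"{'CPUs':>4} {'time(s)':>8} {'speedup':>8} {'efficiency':>10}")
        for p in (1, 2, 4, 8, 16, 32):
            t = run(p)
            print(f"{p:>4} {t:>8.2f} {baseline / t:>8.2f} {baseline / (t * p):>10.2f}")

If efficiency drops well below 1 at some process count, adding CPUs beyond that point is unlikely to improve your overall throughput.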

If you are facing a deadline, are a Lincoln collaborator, and need additional processing power, you may request more processors by sending an email to supercloud@mit.edu.

Please include:

  • Which nodes you are requesting
  • How many additional nodes you need
  • The length of time for which you need them
  • Why you are asking for more resources
  • How you are launching your jobs
  • Any other supporting information showing your workload will scale well to the requested allocation

Before requesting, check that resources are available with LLfree, and review the Best Practices page to verify that you are following them; best practices become more important as you scale up. If you are asking for GPU nodes, please first read through and try our tips on optimizing your GPU jobs.
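If you want to script this availability check, for example as a pre-flight step before launching a large run, a minimal sketch follows. It assumes only that the LLfree command is on your PATH on a login node; it makes no assumptions about LLfree's output format and simply prints the report for inspection.

    # Minimal pre-flight sketch: run LLfree and print its availability report.
    # Assumes only that LLfree is on the PATH (e.g., on a SuperCloud login node).
    import shutil
    import subprocess
    import sys

    if shutil.which("LLfree") is None:
        sys.exit("LLfree not found; run this on a SuperCloud login node.")

    # Capture and print the report; the output format is whatever LLfree emits.
    result = subprocess.run(["LLfree"], capture_output=True, text=True, check=True)
    print(result.stdout)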

The SuperCloud Team will evaluate the current usage to determine how best to accommodate all user requests.

Memory Allocation

Please see the Submitting Jobs page for instructions on how to determine your job's memory requirements, and how to request additional resources if your job requires a lot of memory.