NOTS (Night Owls Time-Sharing Service) is a batch scheduled HPC/HTC cluster running on the Rice Big Research Data (BiRD) cloud infrastructure. The system consists of 298 dual-socket compute blades housed within HPE s6500, HPE Apollo 2000, and Dell PowerEdge C6400 chassis. All nodes are interconnected with a 10 or 25 Gigabit Ethernet network. In addition, the Apollo and C6400 chassis are connected with high-speed Omni-Path for message-passing applications. A 160 TB Lustre filesystem is attached to the compute nodes via Ethernet. The system supports a variety of workloads, including single-node, parallel, large-memory multithreaded, and GPU jobs.

If you use NOTS to support your research activities, you are required to acknowledge (in publications, on your project web pages, …) the National Science Foundation grant that was used in part to fund the procurement of this system. An example acknowledgement that can be used follows. Feel free to modify the wording for your specific needs, but please keep the essential information:

This work was supported in part by the Big-Data Private-Cloud Research Cyberinfrastructure MRI-award funded by NSF under grant CNS-1338099 and by Rice University's Center for Research Computing (CRC).

Backing up and archiving data remains the sole responsibility of the end user. At this point in time shared computing does not offer these services in any automated way. We strongly encourage all users to take full advantage of available storage services to prevent accidental loss or deletion of critical data, and to contact us for advice on best practices in data management. We welcome any suggestions for offering a higher level of data security as we move forward with shared computing at Rice.
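Because no automated backup service is offered, users generally need to copy critical data off the cluster themselves. The following is only an illustrative sketch of one way to do that with rsync; the backup host, username, and directory names are hypothetical placeholders, not actual CRC endpoints.

```bash
#!/bin/bash
# Sketch: mirror a project directory from the cluster to an external backup host.
# "backuphost.example.edu", "username", and "my_project" are placeholders.
rsync -avz --progress \
    "$PROJECTS/my_project/" \
    username@backuphost.example.edu:/backups/my_project/
```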
Due to recent changes in NSF, NIH, DOD, and other government granting agencies, Research Data Management has become an important area of growth for Rice and is a critical factor in both conducting and funding research. The onus of maintaining and preserving research data generated by funded research is placed squarely upon the research faculty, postdocs, and graduate students conducting the research. It is imperative that you are aware of your compliance responsibilities so as not to jeopardize the ability of Rice University to receive federal funding. We will help provide you with the information and assistance you need, but the best place to start is the SPARC Research Compliance website.

Research Data Regulatory Controls and Restrictions

Compliance with regulatory controls and restrictions remains the sole responsibility of the end user. At this point in time shared computing does not offer these services on the clusters or the research virtual machine service. We strongly encourage all users to take full advantage of available storage services to prevent accidentally putting data on a system that is not designed for such data, and to contact us for advice on best practices in data management. We welcome any suggestions for offering a higher level of data security as we move forward with shared computing at Rice. The CRC does not have any kind of policing role, just an advisory one.

NOTE: The physical paths for the cluster file systems are subject to change. You should always access the filesystems using environment variables, especially in job scripts. For information on how to use $PROJECTS, please see our FAQ.
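As an illustration of accessing the filesystems through environment variables, the sketch below shows a batch job script that refers to $PROJECTS instead of a hard-coded physical path. It assumes a SLURM-style scheduler, and the project directory and executable names are placeholders; consult the cluster documentation for the actual scheduler directives.

```bash
#!/bin/bash
#SBATCH --job-name=example_job   # SLURM-style directives are assumed here
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# Reference filesystems through environment variables rather than
# hard-coded physical paths, which are subject to change.
INPUT_DIR="$PROJECTS/my_project/input"      # "my_project" is a placeholder
OUTPUT_DIR="$PROJECTS/my_project/results"

mkdir -p "$OUTPUT_DIR"
./my_analysis --in "$INPUT_DIR" --out "$OUTPUT_DIR"   # placeholder executable
```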
The default shell on all the CRC clusters is bash. To have your account's default shell changed from bash to another supported shell, please file a help request and specify the cluster, username, and desired shell in the ticket.
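Before filing a ticket, you can check which login shell is currently set for your account with standard commands (these are generic Linux commands, not CRC-specific tools):

```bash
# Show the login shell recorded for your account
echo "$SHELL"
getent passwd "$USER" | cut -d: -f7
```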