Run Simcenter STAR-CCM+ workloads

This tutorial shows you how to deploy an HPC cluster and run a Simcenter STAR-CCM+ workload. The cluster is deployed by using Cloud HPC Toolkit, and this tutorial assumes that you have already set up Cloud HPC Toolkit in your environment.

Objectives

In this tutorial, you will learn how to complete the following tasks:

  • Use Cloud HPC Toolkit to create a four-node cluster that's suitable for running Simcenter STAR-CCM+
  • Install Simcenter STAR-CCM+
  • Run Simcenter STAR-CCM+ on the four-node cluster

Costs

This tutorial uses billable components of Google Cloud, including Compute Engine and Cloud Storage.

Before you begin

  • Set up Cloud HPC Toolkit.
  • Get an installation file and input file for Simcenter STAR-CCM+. These files must be obtained directly from Siemens.

    • For the installation file, version 17.02 or later is recommended. If you use the version 17.02 installation package, it has the following package name: STAR-CCM+17.02.007_01_linux-x86_64.tar.gz.

    • For the input file, we recommend the automobile simulation workload: lemans_poly_17m.amg.sim.

  • Review the best practices for running HPC workloads on Google Cloud.

Open your CLI

In the Google Cloud console, activate Cloud Shell.

At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
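
To confirm which project, region, and zone your session is configured to use, you can list the active gcloud configuration:

gcloud config list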

Deploy the HPC cluster

The following steps must be completed from the CLI.

  1. Set a default region and zone in which to deploy your compute nodes.

    gcloud config set compute/region REGION
    
    gcloud config set compute/zone ZONE
    

    Replace the following:

    • REGION: your preferred region
    • ZONE: a zone within your preferred region
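
    For example, the following commands use us-central1 and us-central1-a (example values only; choose any region and zone that meet your needs):

    gcloud config set compute/region us-central1
    gcloud config set compute/zone us-central1-a
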
  2. Define environment variables.

    export DEPLOYMENT_NAME=DEPLOYMENT_NAME
    export GOOGLE_CLOUD_PROJECT=$(gcloud config list --format 'value(core.project)')
    export CLUSTER_REGION=$(gcloud config list --format 'value(compute.region)')
    export CLUSTER_ZONE=$(gcloud config list --format 'value(compute.zone)')
    

    Replace DEPLOYMENT_NAME with a name for your deployment.
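
    To verify that the variables are set, print them:

    echo "${DEPLOYMENT_NAME} ${GOOGLE_CLOUD_PROJECT} ${CLUSTER_REGION} ${CLUSTER_ZONE}"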

  3. Create the HPC deployment folder.

    This tutorial uses the starccm-tutorial.yaml HPC blueprint that is located in the Cloud HPC Toolkit GitHub repository.

    To create a deployment folder from the HPC blueprint, run the following command from Cloud Shell:

    ./ghpc create starccm-tutorial.yaml --vars "deployment_name=${DEPLOYMENT_NAME}" \
        --vars "project_id=${GOOGLE_CLOUD_PROJECT}" \
        --vars "region=${CLUSTER_REGION}" \
        --vars "zone=${CLUSTER_ZONE}"
    

    This command creates the DEPLOYMENT_NAME/ deployment folder, which contains the Terraform configuration needed to deploy your cluster.

    The output is similar to the following:

    Terraform group was successfully created in directory DEPLOYMENT_NAME/primary
    To deploy, run the following commands:
    terraform -chdir=DEPLOYMENT_NAME/primary init
    terraform -chdir=DEPLOYMENT_NAME/primary validate
    terraform -chdir=DEPLOYMENT_NAME/primary apply
    
  4. Deploy the HPC cluster using Terraform.

    To deploy the HPC cluster, complete the following steps:

    1. Set up the Terraform deployment by running the terraform init command:

      terraform -chdir=DEPLOYMENT_NAME/primary init
    2. Generate a plan that describes the Google Cloud resources that you want to deploy by running the terraform apply command:

      terraform -chdir=DEPLOYMENT_NAME/primary apply
    3. Review the plan and then start the deployment by typing yes and pressing Enter.

      This deployment takes about 5 minutes. You will see regular status updates in the terminal.

    If the terraform apply command is successful, a message similar to the following is displayed:

    Apply complete! Resources: 26 added, 0 changed, 0 destroyed.
    

    You are now ready to submit jobs to your HPC cluster.
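
    To confirm that the four cluster VMs were created, you can list the instances whose names begin with your deployment name:

    gcloud compute instances list --filter="name~^${DEPLOYMENT_NAME}"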

Install Simcenter STAR-CCM+

  1. From the CLI, upload the installation file and the input file for Simcenter STAR-CCM+ that you obtained from Siemens to Cloud Storage. After you upload the files, they can be copied to the VMs in your cluster. In the following commands, replace YOUR_BUCKET with a name for your Cloud Storage bucket.

    1. Create a Cloud Storage bucket by using the gsutil mb command.

      gsutil mb gs://YOUR_BUCKET
      
    2. Upload the install file to the Cloud Storage bucket.

      gsutil cp STAR-CCM+17.02.007_01_linux-x86_64.tar.gz gs://YOUR_BUCKET
      
    3. Upload the input file.

      gsutil cp lemans_poly_17m.amg.sim gs://YOUR_BUCKET
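
    To verify that both files were uploaded, you can list the bucket contents:

    gsutil ls gs://YOUR_BUCKET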
      

Connect to a VM

From the CLI, connect to any one of the VMs in your cluster. To connect, run the gcloud compute ssh command.

gcloud compute ssh VM_NAME

Replace VM_NAME with the name of the VM that you want to connect to.
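
For example, to connect to the first node of the cluster, use your deployment name with the -0 suffix:

gcloud compute ssh DEPLOYMENT_NAME-0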

Configure the VM

  1. From the VM instance, complete the following steps:

    1. Set up passwordless SSH. To set up passwordless SSH, complete the following steps:

      1. Create the .ssh directory and generate a new key.

        mkdir -p ~/.ssh
        chmod 700 ~/.ssh
        cd ~/.ssh
        ssh-keygen -t rsa -f id_rsa -C $(whoami) -b 2048 -N ""
        
      2. Create the authorized_keys file.

        cat id_rsa.pub >> authorized_keys
        chmod 600 authorized_keys
        
      3. Use ssh-keyscan to add an entry for each of the VMs to your known_hosts file.

        ssh-keyscan -H DEPLOYMENT_NAME-0 >> ~/.ssh/known_hosts
        ssh-keyscan -H DEPLOYMENT_NAME-1 >> ~/.ssh/known_hosts
        ssh-keyscan -H DEPLOYMENT_NAME-2 >> ~/.ssh/known_hosts
        ssh-keyscan -H DEPLOYMENT_NAME-3 >> ~/.ssh/known_hosts
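
        Optionally, you can confirm that passwordless SSH works between the nodes:

        ssh DEPLOYMENT_NAME-1 hostname

        This command should print the node's hostname without prompting for a password.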
        
    2. Create a hosts.txt file.

      pdsh -N -w DEPLOYMENT_NAME-[0-3] hostname > ~/hosts.txt
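
      The file should contain the four node hostnames, one per line. To confirm the contents, run:

      cat ~/hosts.txt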
      
    3. Copy the install and input files from Cloud Storage to the VM by using gsutil.

      gsutil cp gs://YOUR_BUCKET/* .
    4. After the transfer completes, use tar to extract the contents of the file. For example, if the install file is named STAR-CCM+17.02.007_01_linux-x86_64.tar.gz, run the following command:

      tar xvf STAR-CCM+17.02.007_01_linux-x86_64.tar.gz

      In this example, the extracted archive creates a directory called starccm+_17.02.007.

    5. Set an environment variable to the path of the install script. The exact path depends on the version of Simcenter STAR-CCM+.

      export STAR_CCM_INSTALL=starccm+_17.02.007/STAR-CCM+17.02.007_01_linux-x86_64-2.17_gnu9.2.sh
      
    6. Run the installer as follows:

      sudo $STAR_CCM_INSTALL -i silent \
      -DPRODUCTEXCELLENCEPROGRAM=0 \
      -DINSTALLDIR=$HOME/apps \
      -DINSTALLFLEX=false \
      -DADDSYSTEMPATH=true \
      -DNODOC=true
      

      The installer runs silently and installs Simcenter STAR-CCM+ in the ~/apps directory.
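
      To confirm the installation, you can list the install directory (the version-numbered subdirectory assumes the 17.02.007 release used in this tutorial):

      ls $HOME/apps/17.02.007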

Prepare to run Simcenter STAR-CCM+

  1. From the VM instance, complete the following steps:

    1. Add the location of the Simcenter STAR-CCM+ executable to the PATH variable.

      export PATH="$HOME/apps/17.02.007/STAR-CCM+17.02.007/star/bin/:$PATH"
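
      To confirm that the executable is found on your PATH, you can run:

      which starccm+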
      
    2. Set environment variables for the STAR-CCM+ license configurations. The license configuration is dependent on your installation and is provided by Siemens.

      export LICENSE_SERVER=1999@flex.license.com
      export LICENSE_KEYS=YOUR_LICENSE_KEY
      export LICENSE_FLAGS="-licpath ${LICENSE_SERVER} -power -podkey ${LICENSE_KEYS}"
      

      Replace YOUR_LICENSE_KEY with your license key, and set LICENSE_SERVER to the port and hostname of your license server.

    3. Configure the job parameters. NUM_PROCESSES is the total number of processes to run across all nodes.

      export NUM_PROCESSES=120
      export NUM_PRE_ITER=60
      export NUM_ITER=20
      export MACHINEFILE="$HOME/hosts.txt"
      export RUNDIR=$HOME
      export INPUT_FILE="$HOME/lemans_poly_17m.amg.sim"
      
    4. Verify that the hosts.txt file and the input file (lemans_poly_17m.amg.sim) are in your home directory.
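
      For example:

      ls -l ~/hosts.txt ~/lemans_poly_17m.amg.sim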

Run Simcenter STAR-CCM+ on the four-node cluster

From the VM instance, run Simcenter STAR-CCM+ as follows:

starccm+ ${LICENSE_FLAGS} -benchmark \
"-preits ${NUM_PRE_ITER} -nits ${NUM_ITER} -nps ${NUM_PROCESSES}" \
-mpi intel -mpiflags "-genv I_MPI_FABRICS shm:ofi -genv FI_PROVIDER tcp \
-genv I_MPI_PIN_PROCESSOR_LIST allcores" -cpubind -pio -np ${NUM_PROCESSES} \
-machinefile ${MACHINEFILE} ${INPUT_FILE}

This command generates a long output listing that indicates simulation progress. The output is similar to the following:

Iteration   Continuity     X-momentum     Y-momentum     Z-momentum     Tke            Sdr            Cd             Cl              Vmag (m/s)     Total Solver CPU Time (s)   Total Solver Elapsed Time (s)
2           1.000000e+00   1.000000e+00   1.000000e+00   1.000000e+00   9.705605e-01   1.000000e+00   3.671660e+00   -5.423592e+00   1.074021e+03   2.970680e+03                3.103627e+01
3           2.331303e-01   1.000000e+00   7.333426e-01   8.118292e-01   1.000000e+00   6.399014e-01   8.217129e-01   -1.449110e+00   4.546574e+02   3.175950e+03                3.274697e+01
4           1.752827e-01   3.067447e-01   3.516874e-01   3.793376e-01   4.971905e-01   4.102950e-01   3.66

When the job is complete, the final lines of output are displayed, with CURRENT_DATE replaced by today's date.

Benchmark::Output file name : lemans_poly_17m.amg-Intel_R_Xeon_R_CPU_3-10GHz-CURRENT_DATE.xml

Benchmark::HTML Report : lemans_poly_17m.amg-Intel_R_Xeon_R_CPU_3-10GHz-CURRENT_DATE_194930.html

These files can then be downloaded from your cluster VMs to your local machine by using the gcloud compute scp command.

gcloud compute scp USERNAME@VM_NAME:~/lemans_poly_17m.amg-Intel_R_Xeon_R_CPU_3-10GHz-CURRENT_DATE_194930.* .

Replace USERNAME with your username on the VM and VM_NAME with the name of the VM that ran the job.

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Destroy the HPC cluster

To destroy the cluster, run the following Terraform command from the CLI:

terraform -chdir=DEPLOYMENT_NAME/primary destroy -auto-approve

When the destroy operation is complete, output similar to the following is displayed:

Destroy complete! Resources: xx destroyed.

Delete the Cloud Storage bucket

To delete the bucket, use the gsutil rm command with the -r flag:

gsutil rm -r gs://YOUR_BUCKET

Replace YOUR_BUCKET with the name of your Cloud Storage bucket.

If successful, the response looks like the following example:

Removing gs://my-bucket/...

Delete the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

  1. In the Google Cloud console, go to the Manage resources page.

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.