
Upgrade the solution

Planning and Preparation

  1. Backup of Modified QuickSight Analysis and Dashboard: The solution upgrade may modify the out-of-the-box analysis and dashboard. If you have modified them, back them up by following this documentation.
  2. Data Processing Interval (only applies to upgrades from v1.0.x): The pipeline upgrade takes about 20 minutes. Make sure no data processing job is running while the existing pipeline is being upgraded. You can update the existing pipeline to increase the interval, and check in the console whether the EMR Serverless application has any running jobs (a minimal programmatic check is sketched after this list).
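
If you want to check for running jobs from code instead of the console, the sketch below is one way to list active EMR Serverless job runs with boto3. The application ID is a placeholder introduced here for illustration; use the EMR Serverless application created by your pipeline.

    # Sketch: list EMR Serverless job runs that have not finished yet.
    # <application-id> is a placeholder; use the application backing your pipeline.
    import boto3

    emr = boto3.client("emr-serverless")
    response = emr.list_job_runs(
        applicationId="<application-id>",
        states=["SUBMITTED", "PENDING", "SCHEDULED", "RUNNING"],
    )
    for run in response["jobRuns"]:
        print(run["id"], run["state"])
    if not response["jobRuns"]:
        print("No active job runs - safe to start the upgrade.")

If the list is empty, no data processing job is running and you can proceed with the upgrade.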

Upgrade Process

Upgrade web console stack

  1. Log in to the AWS CloudFormation console, select your existing web console stack, and choose Update.
  2. Select Replace current template.
  3. Under Specify template:

    • Select Amazon S3 URL.
    • Refer to the table below to find the link for your deployment type.
    • Paste the link in the Amazon S3 URL box.
    • Choose Next.
    Template | Description
    -------- | -----------
    Use Cognito for authentication | Deploy as public service in AWS regions
    Use Cognito for authentication with custom domain | Deploy as public service with custom domain in AWS regions
    Use OIDC for authentication | Deploy as public service in AWS regions
    Use OIDC for authentication with custom domain | Deploy as public service with custom domain in AWS regions
    Use OIDC for authentication within VPC | Deploy as private service within VPC in AWS regions
    Use OIDC for authentication with custom domain in AWS China | Deploy as public service with custom domain in AWS China regions
    Use OIDC for authentication within VPC in AWS China | Deploy as private service within VPC in AWS China regions
  4. Under Parameters, review the parameters for the template and modify them as necessary. Refer to Deployment for details about the parameters.

  5. Choose Next.
  6. On the Configure stack options page, choose Next.
  7. On the Review page, review and confirm the settings. Be sure to check the box acknowledging that the template might create AWS Identity and Access Management (IAM) resources.
  8. Choose View change set and verify the changes.
  9. Choose Execute change set to deploy the stack.

You can view the status of the stack in the AWS CloudFormation console in the Status column. The status should change to UPDATE_COMPLETE after a few minutes.
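
If you prefer to monitor the update from code rather than the console, the sketch below polls the stack status with boto3 until the update finishes. The stack name is a placeholder; use the name of your existing web console stack.

    # Sketch: poll the CloudFormation stack status until the update completes.
    # <web-console-stack-name> is a placeholder for your existing stack name.
    import time
    import boto3

    cfn = boto3.client("cloudformation")
    stack_name = "<web-console-stack-name>"

    while True:
        status = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["StackStatus"]
        print("Stack status:", status)
        if not status.endswith("_IN_PROGRESS"):
            break
        time.sleep(30)

    # Expect UPDATE_COMPLETE; a status containing ROLLBACK means the update failed.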

Upgrade the pipeline of the project

Important

If you encounter any issues during the upgrade process, refer to Troubleshooting for more information.

  1. Log in to the web console of the solution.
  2. Go to Projects, and choose the project to be upgraded.
  3. Choose the project ID or the View Details button to go to the pipeline details page.
  4. On the project details page, choose the Upgrade button.
  5. You will be prompted to confirm the upgrade action.
  6. Choose Confirm, and the pipeline will change to Updating status.

You can view the status of the pipeline in the solution console in the Status column. After a few minutes, the status should change to Active.

Post-Upgrade Actions

Migrate the existing data after upgrading from 1.0.x

If you upgraded the pipeline from v1.0.x, you need to perform the actions below to migrate data from the old table ods_events to the new tables event, event_parameter, user, and item in Redshift:

  1. Open Redshift query editor v2. You can refer to the AWS documentation Working with query editor v2 for how to log in and query data using the query editor.

    Note

    You must use the admin user or a user with ownership permission on the schema (which is named after the app ID).

  2. Select the Serverless workgroup or provisioned cluster, navigate to <project-id> -> <app-id> -> Tables, and make sure the tables for the app ID are listed there.

  3. Create a new SQL Editor.

  4. Execute the below SQL in the editor.

    -- please replace `<app-id>` with your actual app id
    CALL "<app-id>".sp_migrate_ods_events_1_0_to_1_1();
    
  5. Wait for the SQL to complete. The execution time depends on the volume of data in table ods_events.

  6. Execute the below SQL to check the stored procedure execution log; make sure there are no errors there.

    -- please replace `<app-id>` with your actual app id
    SELECT * FROM "<app-id>"."clickstream_log" WHERE log_name = 'sp_migrate_ods_events' ORDER BY log_date DESC;
    
  7. If no other applications use the legacy tables and views, you can run the SQL statements below to drop them and free Redshift storage. Before doing so, you may want to run the optional sanity check sketched after this procedure.

    -- please replace `<app-id>` with your actual app id
    DROP TABLE "<app-id>".dim_users CASCADE;
    DROP TABLE "<app-id>".ods_events CASCADE;
    
    DROP PROCEDURE "<app-id>".sp_clear_expired_events(retention_range_days integer);
    DROP PROCEDURE "<app-id>".sp_upsert_users();
    DROP PROCEDURE "<app-id>".sp_migrate_ods_events_1_0_to_1_1();
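
Optionally, before running the cleanup in step 7, you can sanity-check the migration by comparing row counts between the legacy ods_events table and the new event table. The sketch below does this with the Redshift Data API (boto3); the workgroup and database names are assumptions for a Redshift Serverless deployment (a provisioned cluster would pass ClusterIdentifier and DbUser or SecretArn instead), and it assumes the new event table holds one row per migrated event. You can also run the same two COUNT queries directly in the query editor.

    # Sketch: compare row counts between the legacy and new event tables before
    # dropping ods_events. The workgroup and database names are assumptions;
    # replace them (and <app-id>) with the values of your deployment.
    import time
    import boto3

    rsd = boto3.client("redshift-data")
    app_id = "<app-id>"

    stmt = rsd.execute_statement(
        WorkgroupName="<redshift-serverless-workgroup>",
        Database="<database-name>",
        Sql=(
            f'SELECT (SELECT COUNT(*) FROM "{app_id}".ods_events) AS legacy_count, '
            f'(SELECT COUNT(*) FROM "{app_id}".event) AS migrated_count;'
        ),
    )

    # Wait for the statement to finish, then print both counts.
    while True:
        desc = rsd.describe_statement(Id=stmt["Id"])
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(2)

    if desc["Status"] == "FINISHED":
        record = rsd.get_statement_result(Id=stmt["Id"])["Records"][0]
        print("legacy ods_events rows:", record[0]["longValue"])
        print("migrated event rows:", record[1]["longValue"])
    else:
        print("Query did not finish:", desc.get("Error", desc["Status"]))

If the counts match (or any difference is explained by data ingested after the migration started), it is safe to proceed with dropping the legacy objects.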