Quality Testing

The quality of the Terrascope platform is monitored in several ways: manual testing, automation testing, and performance testing.

Manual testing is a technique in which test cases are executed interactively. The process starts when the requirements are defined. Once the use cases, including the error scenarios, are clear, test cases are written. A test case contains a description, the steps to execute, the expected result, a priority, and a reference to the requirement. When the test cases have been written, the test case list is reviewed. As soon as the developed code is available, the test cases are executed and their status is updated; errors are reported as bugs. All test cases are repeated on new builds until they pass and the requirement is considered fulfilled. An illustrative sketch of such a test case record follows.
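
Purely as an illustration of the attributes listed above (the field names and example values are assumptions, not an actual Terrascope template), such a test case could be captured as a simple record:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TestCase:
        """Illustrative test case record; not an actual Terrascope template."""
        requirement_id: str      # reference to the requirement being verified
        description: str         # what the test case verifies
        steps: List[str]         # steps to execute
        expected_result: str     # expected outcome after executing the steps
        priority: str            # e.g. "high", "medium", "low"
        status: str = "not run"  # updated after each execution, e.g. "passed" or "failed"

    # Hypothetical test case for an error scenario
    invalid_login = TestCase(
        requirement_id="REQ-042",
        description="Login with an invalid password is rejected",
        steps=["Open the login page",
               "Enter a valid user name and an invalid password",
               "Submit the form"],
        expected_result="An error message is shown and no session is created",
        priority="high",
    )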

Automation testing is performed by writing test scripts or by using an automation testing tool, and is used to automate repetitive tests. After the manual test cases are written, a selection of them is converted into automated test scripts. These form the regression test suite, which is repeated for every release, even for requirements that were delivered long ago. Regression tests check that the parts of the platform that were not changed still work correctly. A sketch of such a regression test is given below.
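
As a minimal sketch of what an automated regression test could look like (the base URL, paths, and expectations are assumptions for illustration, not the actual Terrascope regression suite), written in Python and runnable with pytest:

    import requests

    BASE_URL = "https://services.terrascope.be"  # assumed endpoint, for illustration only

    def test_catalogue_endpoint_is_reachable():
        """Regression check: the endpoint still answers after a new release."""
        response = requests.get(f"{BASE_URL}/catalogue", timeout=30)
        assert response.status_code == 200

    def test_catalogue_returns_json():
        """Regression check: the response format has not changed for untouched functionality."""
        response = requests.get(f"{BASE_URL}/catalogue", timeout=30)
        assert "json" in response.headers.get("Content-Type", "")

Such scripts are executed by a test runner as part of every release, so regressions in unchanged parts of the platform are caught automatically.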

Performance testing itself consists of different kinds of testing: the Terrascope platform is both stress tested and load tested.

  • Stress testing checks the robustness of the software by testing beyond the limits of normal operation. It tries to break the platform by putting it under a very heavy load; the purpose is to make sure that the platform fails gracefully and recovers correctly.

  • Load testing determines the platform's performance under a specific expected load. The purpose is to check how the platform behaves when multiple users access it simultaneously and to ensure that the software runs smoothly under real-life load conditions. Performance testing is carried out with a dedicated tool that simulates the load and reports the performance of the platform; a minimal sketch of such a simulated load is given after this list.
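
The documentation does not name the load-testing tool, so the following is only a minimal sketch of how such a simulated load could be defined, assuming an open-source tool such as Locust; the paths are placeholders, not actual Terrascope endpoints.

    from locust import HttpUser, task, between

    class SimulatedTerrascopeUser(HttpUser):
        """One simulated user; the tool spawns many of these in parallel to create the load."""
        wait_time = between(1, 5)  # each simulated user pauses 1-5 seconds between requests

        @task(3)
        def browse_catalogue(self):
            # Placeholder path, for illustration only.
            self.client.get("/catalogue")

        @task(1)
        def list_products(self):
            # Placeholder path, for illustration only.
            self.client.get("/catalogue/products?limit=10")

Started with, for example, locust -f loadtest.py --host <platform-under-test> and a chosen number of simulated users, the tool generates the expected load and reports response times; raising the number of users well beyond the expected load turns the same scenario into a stress test.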
