Master Test Plan

Document: Master Test Plan
Author: Reima Parviainen
Version: 0.1
Date: 7.6.2023

General Information

This master test plan outlines the testing approach, goals, and schedule for the Tukko project. The testing activities will be conducted by TESTribe, who will utilize GitLab and Open Project Framework for test documentation. Multiple testing tools, including WAVE, Selenium, Robot Framework, Playwright, and manual/exploratory testing, will be employed to ensure comprehensive testing coverage.

About Test Planning

The test planning phase involves defining the scope, objectives, and strategies for testing the system. It includes identifying the features to be tested and outlining the testing environments, resources, and responsibilities.

About the Test Target / System Under Test

The test target is the Tukko project, which aims to deliver a robust and reliable system. The system under test includes various features and functionalities that need to be thoroughly tested to ensure their proper functioning.

Test Goals and Primary Needs

The primary goals of the testing process are to validate the system's functionality, performance, security, and availability. The testing activities aim to identify and address any defects, usability issues, and performance bottlenecks. Additionally, the testing process should ensure compliance with the specified requirements and user expectations.

Schedule

The testing activities will be conducted in accordance with the project plan. Please refer to the project plan for the detailed schedule.

Release Plan

The testing process will align with the release plan for the Tukko project. For more information, please refer to the release plan.

Test Cases

Test cases are part of Tukko's documentation. Every test case follows the same template, which consists of the sections listed below (a short modeling sketch in Python follows the list):

  1. Test Case ID: A unique identifier for the test case, following the format "TCXXX-YYY".

  2. Author: The name of the person who created the test case.

  3. Date of creation: The date when the test case was created.

  4. Class: The classification of the test case, indicating whether it is functional, non-functional, or an acceptance test.

  5. Type: The type of test, specifying whether it is a compliance, correction, evolution, regression, integration, end-to-end, accessibility, performance, security, or backend test.

  6. Test Description/Objective: A brief description of the test case, explaining its purpose or objective.

  7. Links to requirements or other sources: References to the feature requirement, use case, and feature associated with the test case.

  8. Test Pre-state: The preconditions or setup required before executing the test case.

  9. Test steps: A table with the sequential steps to be performed during the test, including the action to be taken and the expected result.

  10. To be taken into account during the test: Any additional information or considerations that should be kept in mind while executing the test.

  11. PASS/FAIL Criteria: The criteria to determine whether the test case passes or fails, based on the successful completion of all test steps and the expected results.
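
As an illustration of the template above, here is a minimal sketch of how a test case could be modeled in Python. The field names, types, and example values are our own assumptions for demonstration; the template itself is the document structure described in the list.

```python
from dataclasses import dataclass, field


@dataclass
class TestStep:
    """One row of the test-steps table: an action and its expected result."""
    action: str
    expected_result: str


@dataclass
class TestCase:
    """Mirrors the eleven sections of the test case template (illustrative)."""
    test_case_id: str                  # unique ID in the "TCXXX-YYY" format, e.g. "TC001-001"
    author: str
    date_of_creation: str
    test_class: str                    # functional, non-functional, or acceptance
    test_type: str                     # compliance, regression, integration, ...
    description: str                   # purpose or objective of the test
    links: list[str] = field(default_factory=list)  # requirements, use cases, features
    pre_state: str = ""                # preconditions or setup before execution
    steps: list[TestStep] = field(default_factory=list)
    notes: str = ""                    # things to keep in mind during the test
    pass_fail_criteria: str = "All steps produce their expected results."
```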

Using this template offers several benefits. It ensures a standardized format for consistency and readability, and the included links establish traceability to requirements and functionality. The structured format encourages comprehensive test coverage and systematic thinking, while a clearly stated purpose makes collaboration easier. The template is reusable and promotes consistency across the documentation, and the PASS/FAIL criteria enable accurate evaluation and reporting. Overall, it improves the efficiency, effectiveness, and quality of the testing process.

Test Results

Running the test cases naturally produces a large number of test results. Test results also follow a template, structured in the same way as the test cases:

  1. Test Report Details:

    • Provide the iteration number and date of the test report.
    • Specify the feature being tested.
  2. Test Objectives:

    • List the objectives or goals of each type of test being conducted.
  3. Test Execution Details:

    • Mention the names of the specific tests or test suites that were executed.
    • Provide the location or source of these tests or test suites.
  4. Test Cases: Create a table that includes the following columns:

    • Test Case: Identify each test case using a unique identifier.
    • Description: Describe the purpose or objective of each test case.
    • Status: Indicate whether each test case passed or failed.
    • Notes: Include any additional comments or observations about each test case.
  5. Summary: Summarize the test results with the following information (a small calculation sketch follows this list):

    • Total Test Cases: Specify the total number of test cases executed.
    • Passed: State the number of test cases that passed.
    • Failed: Indicate the number of test cases that failed.
    • Pending: Mention the number of test cases that are still pending.
    • Success Rate: Calculate the success rate as a percentage.
  6. Observations and Notes:

    • Provide general observations, issues, or notes related to the testing iteration.
    • Include any relevant information or comments that may be helpful for further analysis.
  7. Recommendations:

    • Suggest any recommendations for improvements or additional testing, if applicable.
    • Highlight areas where further attention or investigation may be required.
  8. Next Steps:

    • Outline the next steps or actions to be taken in the upcoming testing iteration.
    • Specify any specific tasks or areas that need to be addressed or focused on.
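
The summary block in item 5 is simple arithmetic; as an illustration, the sketch below computes it from per-case statuses. The status labels ("PASS", "FAIL", "PENDING") are our own placeholders, not names mandated by the template.

```python
def summarize(statuses: list[str]) -> dict:
    """Build the summary section of a test report from per-case statuses."""
    total = len(statuses)
    passed = statuses.count("PASS")
    failed = statuses.count("FAIL")
    pending = statuses.count("PENDING")
    # Success rate expressed as a percentage of all executed test cases.
    success_rate = (passed / total * 100) if total else 0.0
    return {
        "Total Test Cases": total,
        "Passed": passed,
        "Failed": failed,
        "Pending": pending,
        "Success Rate": f"{success_rate:.1f} %",
    }


print(summarize(["PASS", "PASS", "FAIL", "PENDING"]))
# -> {'Total Test Cases': 4, 'Passed': 2, 'Failed': 1, 'Pending': 1, 'Success Rate': '50.0 %'}
```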

This kind of test result documentation is beneficial for several reasons. It provides a standardized format for documenting test results, ensuring consistency and readability, and it offers a comprehensive overview of each test execution: case statuses, metrics, observations, recommendations, and next steps. This enables stakeholders to evaluate the effectiveness of the testing process and make informed decisions based on the outcomes. It also establishes a traceable record of testing activities, helps ensure adequate coverage of functionality, and promotes collaboration and communication among team members when discussing and planning future testing iterations. Overall, this type of test result documentation enhances the efficiency, effectiveness, and quality of the testing process.

Tested Features

The following table lists the features that are tested:

| Feature                                             | Priority |
|-----------------------------------------------------|----------|
| Feature 001 - Service Docker Containerizer          | P1       |
| Feature 002 - GDPR Statements                       | P1       |
| Feature 004 - Leaflet.js for the map                | P3       |
| Feature 005 - Visualize traffic hotspots            | P1       |
| Feature 006 - Highlight an area from the map        | P1       |
| Feature 007 - Filter data                           | P1       |
| Feature 009 - Service domain name (xx.wimmalab.org) | P3       |
| Feature 010 - Mobile responsiveness                 | P2       |
| Feature 011 - Customer feedback system              | P1       |
| Feature 014 - Smart data filtering                  | P3       |
| Feature 015 - Show LAM Station locations            | P1       |
| Feature 016 - TMS Show real-time data               | P2       |
| Feature 017 - TMS Show 5min averages of traffic     | P2       |
| Feature 019 - MongoDB for historical data           | P3       |
| Feature 022 - Accessibility                         | P1       |
| Feature 023 - Zoom to a marker / selected area      | P3       |

Non-Tested Features

The following table lists the features that will be tested after implementation:

| Feature                                                | Priority |
|--------------------------------------------------------|----------|
| Feature 003 - Service Analytics                        | P3       |
| Feature 008 - Automatized testing                      | P2       |
| Feature 012 - Login system                             | P3       |
| Feature 013 - Email subscription                       | P3       |
| Feature 018 - Digitransit API info                     | P2       |
| Feature 020 - Write unit tests for queries             | P1       |
| Feature 021 - Filter by city, region, street name      | P1       |
| Feature 024 - Optimization                             | P2       |
| Feature 025 - Show Traffic Announcements on the map    | P3       |
| Feature 026 - Show fetched Sweden traffic data on map  | P3       |
| Feature 027 - Show fetched Norway traffic data on map  | P3       |

Testing Environments

The testing will be conducted in various testing environments to ensure comprehensive coverage and compatibility.

Resources and Responsibilities

The allocation of resources and responsibilities for testing is outlined below:

| Name             | Description               | Company / Entity | Task Responsibilities                      |
|------------------|---------------------------|------------------|--------------------------------------------|
| Reima Parviainen | Team Leader               | IoTitude         | Lead the project and handle test planning  |
| Justus Hänninen  | Junior Developer          | IoTitude         | Backend, TypeScript, SKILL DB API          |
| Hai Nguyen       | Junior Developer          | IoTitude         | GitLab Pipeline                            |
| Ilia Chichkanov  | Junior Developer          | IoTitude         | Backend, SKILL DB API                      |
| Olli Kainu       | Junior Developer          | IoTitude         | Frontend                                   |
| Otto Nordling    | Junior Developer / Tester | IoTitude         | Testing                                    |
| Alan Ousi        | Junior Developer / Tester | IoTitude         | Testing                                    |

Testing Levels

The testing process will encompass multiple levels to ensure comprehensive coverage and identify any issues at different stages.

Acceptance Testing

Acceptance testing will be performed to validate that the system meets the specified requirements and is ready for deployment.

System Testing

System testing will focus on validating the system as a whole, including its integration, functionality, and performance.

System Integration Testing

System integration testing will verify the integration and interaction between various components/modules of the system.

Module / Component Testing

Module/component testing will be conducted to validate the individual modules or components of the system.

Testing and Troubleshooting Processes

The testing process will follow a structured approach, including test planning, test case design, test execution, defect tracking, and reporting. Any identified issues or defects will be documented and tracked until resolution.

Chosen Test Strategy

The chosen test strategy will include a combination of automated and manual testing approaches. Automated testing will be utilized for repetitive and regression testing tasks, while manual and exploratory testing will focus on validating the system's usability and user experience.

Test Tools and Software Used

The following testing tools and software will be used:

Functional Testing

  • GitLab / Open Project Framework for test documentation
  • Selenium for web application testing
  • Robot Framework for test automation
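
As an example of how Selenium fits this plan, the sketch below opens a page and checks that it renders a heading. The URL and locator are placeholders, not Tukko's real address or selectors.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # assumes geckodriver is available on PATH
try:
    driver.get("https://example.org/")            # placeholder for the Tukko URL
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert heading.text, "page should render a top-level heading"
finally:
    driver.quit()
```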

Accessibility and Cross-Browser Testing

  • WAVE tool for accessibility testing
  • Playwright for cross-browser testing
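
A minimal sketch of how Playwright can run the same check across browser engines is shown below; the URL is a placeholder.

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    for browser_type in (p.chromium, p.firefox, p.webkit):
        browser = browser_type.launch()
        page = browser.new_page()
        page.goto("https://example.org/")         # placeholder for the Tukko URL
        assert page.title(), f"{browser_type.name}: page should have a title"
        browser.close()
```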

Security Testing

  • Manual testing techniques for security testing
  • Penetration testing for identifying vulnerabilities

Availability Testing

  • Load testing tools for assessing system performance under heavy loads
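
As a rough illustration of availability testing, the sketch below fires concurrent GET requests using only the Python standard library and reports the success count and elapsed time. The URL and request count are placeholders, and a dedicated load testing tool would be used in practice.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.org/"   # placeholder for the Tukko endpoint
REQUESTS = 50                  # illustrative request volume

def fetch(url: str) -> int:
    """Issue one GET request and return its HTTP status code."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.status

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    statuses = list(pool.map(fetch, [URL] * REQUESTS))
elapsed = time.perf_counter() - start

ok = sum(1 for status in statuses if status == 200)
print(f"{ok}/{REQUESTS} requests succeeded in {elapsed:.2f} s")
```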

Attachments

-