What I Learned About Testing and Automating a Web Application


By Andréia Ribeiro


When I started my journey into Quality Assurance, I quickly realised that testing a web application goes far beyond clicking buttons and checking if things look right. 

During my mentorship with Julio de Lima, I had the chance to work on a real Bank Web Application project. The goal was to go beyond simple test automation and practice functional and non‑functional testing, including accessibility, performance, responsiveness, and even how to simulate APIs before they are ready.

Understanding What a Web Application Is

Before testing, it is necessary to understand what a web application is and how users interact with it.

Web applications are accessed by many users at the same time, often from different cities, countries, and devices, such as mobile phones, desktops, tablets, and even unusual devices like Kindle browsers. They must work across different screen sizes, browsers, and browser versions. This creates many testing scenarios.

As a QA professional, I realised I need to evaluate not only functional behaviour but also important quality aspects such as:

  • Security

  • Performance under heavy traffic

  • Usability

  • Accessibility for users with different needs

These factors directly affect software quality and should guide my testing strategy like a checklist.

Also, it is important to understand the application architecture: the front‑end (what users see and interact with) and the back‑end (which processes data and returns responses). A tester must verify that both sides communicate correctly, including how the system behaves when errors happen.

In addition, web testing involves understanding some of the technical side of development, frameworks like React or Angular, page components, and technologies like HTML, CSS, and JavaScript. A good tester combines user perspective, system architecture knowledge, and technical awareness to identify risks and defects.

Accessibility Testing

I learned that accessibility is not optional; it is part of quality. It means ensuring that websites and apps can be used by everyone, including people with disabilities.

For example, if a developer does not provide labels for buttons or form fields, a screen reader cannot describe them properly, making the application unusable for visually impaired users.

Tool I used: Axe DevTools, a Chrome extension that scans a page and automatically detects accessibility issues.

The tool highlighted problems such as poor colour contrast and missing form labels. For example, if grey text on a white background did not meet WCAG contrast standards, Axe flagged it as an issue. It also identified form elements without labels, which are critical for screen readers.

How I used it: First, I installed the axe DevTools extension from the Chrome Web Store. After installation, I opened the web application, right‑clicked on the page, selected Inspect, and went to the axe DevTools tab. I clicked Scan All of My Page to run an accessibility analysis. Axe DevTools highlighted all issues and their locations, including missing labels, viewport zoom restrictions, and form field errors. I then used the Highlight feature to locate each problematic element directly on the page and fixed them one by one.


Axe can be integrated with Cypress, allowing accessibility checks to become part of an automated test suite.
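As a sketch of that integration (assuming the cypress-axe package and its axe-core dependency are installed, and using a hypothetical /login route), an accessibility check can run inside a Cypress spec:

```javascript
// cypress/e2e/accessibility.cy.js — sketch; requires the Cypress runner
// and: npm install --save-dev cypress-axe axe-core
import "cypress-axe";

describe("Accessibility", () => {
  it("has no detectable a11y violations on the login page", () => {
    cy.visit("/login");  // page under test (hypothetical route)
    cy.injectAxe();      // injects the axe-core runtime into the page
    cy.checkA11y();      // fails the test if axe reports violations
  });
});
```

With this in place, the same rules Axe DevTools checks manually run on every test execution.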

Front‑end Performance Testing

Performance testing evaluates how efficiently a web application loads and renders in the browser. A simple and effective way I learned to measure this is using Lighthouse, a built‑in tool in Google Chrome.

How I used it: I opened the web application, right‑clicked on the page, selected Inspect, and went to the Lighthouse tab. I chose Navigation mode, selected only Performance, set the device type (Desktop or Mobile), and clicked Analyse Page Load. Lighthouse reloaded the page, measured performance, and generated a report with a score.

The score is based on key metrics:

  • FCP (First Contentful Paint): time until the first visible element appears on screen.

  • LCP (Largest Contentful Paint): time until the largest visible content element is loaded.

  • TBT (Total Blocking Time): time during which the page is unresponsive to user interaction.

  • CLS (Cumulative Layout Shift): measures unexpected layout movement while loading.

  • SI (Speed Index): indicates how quickly the page content becomes visible.
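To make these metrics concrete, here is a small helper (a sketch of my own, not part of Lighthouse) that classifies a measured value using the Core Web Vitals thresholds published on web.dev:

```javascript
// Sketch: classify a Core Web Vitals measurement as good / needs
// improvement / poor, using the published web.dev thresholds.
// LCP and FCP are in milliseconds; CLS is a unitless score.
function classifyMetric(name, value) {
  const thresholds = {
    LCP: { good: 2500, poor: 4000 }, // ms
    FCP: { good: 1800, poor: 3000 }, // ms
    CLS: { good: 0.1, poor: 0.25 },  // layout-shift score
  };
  const t = thresholds[name];
  if (!t) throw new Error(`Unknown metric: ${name}`);
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs improvement";
  return "poor";
}

console.log(classifyMetric("LCP", 2100)); // "good"
console.log(classifyMetric("CLS", 0.3));  // "poor"
```

Comparing a report's raw numbers against these bands made it easier to explain to developers which metric actually needed work.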



Lighthouse also provides screenshots of loading stages, diagnostics, and recommendations for improvement (e.g., eliminating render‑blocking resources, minifying JavaScript files).

These insights helped me communicate performance findings to developers and suggest concrete improvements.

Responsive Testing

Responsive testing verifies whether a web application adapts correctly to different screen sizes and devices: laptops, tablets, smartphones, and large monitors. I learned that a responsive website should not simply shrink content – it should intelligently reorganise and reposition elements to maintain usability across all screen dimensions.

Tool I used: Google Chrome Developer Tools. I opened the website, right‑clicked and selected Inspect, then clicked the device toolbar icon (the small screen/mobile icon). This opened responsive mode, where I could:

  • Manually resize the screen width and height.

  • Select preset devices (iPhone SE, iPhone 14 Pro Max, iPad, etc.).

  • Identify breakpoints where layout changes (e.g., a centred logo moving to the left on larger screens).

Breakpoints should match design specifications. If elements appear in incorrect positions, it may be a responsiveness defect.
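The expected layout per breakpoint can be written down as a small lookup, which then gives automated responsive checks something concrete to assert against. The breakpoint values below are hypothetical; real values must come from the design specification:

```javascript
// Sketch: expected layout per viewport width, using hypothetical
// breakpoints (the real ones come from the design specification).
function expectedLayout(widthPx) {
  if (widthPx < 768) return "mobile";   // stacked layout, centred logo
  if (widthPx < 1024) return "tablet";  // two-column layout
  return "desktop";                     // full layout, logo on the left
}

console.log(expectedLayout(375));  // "mobile" (iPhone SE width)
console.log(expectedLayout(1440)); // "desktop"
```

In a test, the viewport is set to each width and the rendered layout is compared with this expectation; a mismatch is a responsiveness defect.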

Isolated Testing Using Mocks

APIs are the bridge between front‑end and back‑end. In many teams, the front‑end and back‑end are developed separately, which creates a gap when one side is ready before the other. Waiting for full integration delays testing and defect detection.

Solution I learned: use mocks.
A mock simulates the behaviour of a real API before it is fully developed. I used Mockoon, a tool that allows you to create fake API endpoints locally. I defined routes (e.g., POST /login), configured expected responses (e.g., status 200), and structured JSON responses to match what the real backend would eventually return.

For example, when the front‑end sent a username and password to /login, the mock API was configured to always return { data: { token: "..." } }, simulating a successful login. This allowed me to test the front‑end independently, even when the real back‑end was not ready.

I also learned to create more advanced rules: if the username is correct → return 200 OK; if incorrect → return 401 Unauthorised.
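In Mockoon these rules are configured through its GUI; as a sketch of the same logic, the mock's decision can be expressed as a pure function (the credentials and token here are made-up examples):

```javascript
// Sketch of the mock API's response rules. Mockoon configures this
// through its GUI; credentials and token values are made up.
function mockLoginResponse(username, password) {
  if (username === "user" && password === "pass") {
    return { status: 200, body: { data: { token: "fake-jwt-token" } } };
  }
  return { status: 401, body: { error: "Unauthorised" } };
}

console.log(mockLoginResponse("user", "pass").status);  // 200
console.log(mockLoginResponse("wrong", "pass").status); // 401
```

Thinking of the mock as a function like this also made it easier to keep the fake responses consistent with the contract the real back‑end would eventually implement.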

Benefits I noticed: earlier testing, reduced dependency on back‑end delivery, isolation of front‑end issues, and earlier defect detection.

Common Web Application Errors

During testing, I learned to intentionally look for these typical problems:


  • Incorrect form validation: entered invalid data (e.g., wrong email format) and tried to save.

  • Broken links or buttons: checked links manually or used a tool like Broken Link Checker.

  • Broken navigation flow: tested full user journeys end‑to‑end (e.g., checkout).

  • Calculation/processing errors: verified calculations manually against expected results.

  • Poor or incorrect error messages: ensured messages were clear, correct, and user‑friendly.

  • Session management issues: tested login/logout; verified that after logout, restricted pages were no longer accessible.

  • Non‑user‑friendly API/system errors: ensured technical errors (e.g., "Fatal Error") were translated into understandable messages.


Automating Tests with Cypress

Cypress is a popular JavaScript framework for end‑to‑end testing of web applications. It is an all‑in‑one tool that includes a test runner, a way to interact with the browser, and a set of libraries to write automated tests.

How I set up a Cypress project

  1. Initialised a Node.js project: npm init -y

  2. Installed Cypress: npm install --save-dev cypress

  3. Opened Cypress: npx cypress open

  4. Chose E2E testing and a browser (Electron is lightweight and fast).

Test structure

Cypress uses Mocha’s syntax:

```javascript
describe("Login", () => {
  beforeEach(() => {
    // runs before each test
  });

  it("should log in with valid credentials", () => {
    cy.visit("/login");
    // ...
  });
});
```

Headless mode

  • npx cypress run – runs tests in headless mode (no UI), ideal for CI/CD.

  • npx cypress run --headed – runs with a visible browser.

I also learned to add scripts to package.json:

```json
"scripts": {
  "test": "cypress run"
}
```

Then I could simply run npm test.

Fixtures

I learned to store test data in JSON files inside the cypress/fixtures folder. This separates test data from test code.

Environment variables

I learned to use environment variables to run the same tests against different environments (local, QA, production) without changing the test code. I defined variables like BASE_URL in the Cypress configuration or via the command line (--env), and accessed them with Cypress.env().
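For example, the project's credentials fixture (credentials.json, mentioned later in this article) could look like this; the exact field names are hypothetical:

```json
{
  "valid": { "username": "user", "password": "pass" },
  "invalid": { "username": "user", "password": "wrong-pass" }
}
```

Inside a test, the file is loaded with cy.fixture("credentials"), so changing test data never requires touching test code.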

Custom commands 

Custom commands helped me avoid code duplication. For example, logging in with valid credentials happens many times. Instead of repeating the same steps, I created a custom command:

```javascript
// cypress/support/commands.js — registers the reusable command
Cypress.Commands.add("loginWithValidCredentials", () => {
  cy.get("#username").type("user");
  cy.get("#password").type("pass");
  cy.get("button[type=submit]").click();
});
```

Identify candidates for custom commands by looking for repeated code patterns. There are two types:

  • Feature‑specific – used only in one area (e.g., login).

  • Global reusable – used everywhere (e.g., filling combo boxes, validating toast messages).

HTML reports

I used cypress-mochawesome-reporter to generate detailed HTML reports with screenshots and videos of failed tests. This made debugging much easier.
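Wiring the reporter up is mostly configuration. Based on the reporter's README (a sketch; exact options may vary by version), cypress.config.js looks roughly like this:

```javascript
// cypress.config.js — sketch of cypress-mochawesome-reporter setup,
// following the reporter's README (options may differ by version)
const { defineConfig } = require("cypress");

module.exports = defineConfig({
  reporter: "cypress-mochawesome-reporter",
  reporterOptions: {
    charts: true,              // pass/fail charts in the HTML report
    embeddedScreenshots: true, // inline failure screenshots
  },
  e2e: {
    setupNodeEvents(on, config) {
      require("cypress-mochawesome-reporter/plugin")(on);
      return config;
    },
  },
});
```

The README also asks for import "cypress-mochawesome-reporter/register"; in the support file so the reporter can hook into each test.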

All the automation work described above was applied to a real Bank Web Application (from Júlio de Lima's repositories: the Web project and the API project).

What the project covers:

  • Login tests – valid and invalid credentials.

  • Money transfer tests – successful transfer (≤ $5000) and error handling for amounts > $5000 without a transaction token.

Key features of the project:

  • Custom Cypress commands for login, transfers, toast messages, and combo boxes.

  • Fixtures for test data (credentials.json).

  • Environment variables for flexible configuration.

  • Reports with screenshots and videos using cypress-mochawesome-reporter.

  • Clear project structure with e2e/fixtures/support/commands/.


The project is available on GitHub:

👉 https://github.com/Andreiasribeiro/bank-web-tests

Conclusion

Testing a web application is much more than checking if buttons work. I learned that a good QA professional must also consider accessibility, performance, responsiveness, API behaviour (including mocks), and typical web errors. Automating tests with tools like Cypress not only saves time but also provides reliable feedback for the whole team.

This project was an important milestone in my transition into QA. I share this article as a personal record of what I learned, hoping that others who are also starting out might find inspiration and a practical starting point.

