TestNG in Selenium — My Honest Take After Years of Using It
I’ll be straight with you. When I first heard ‘Just use JUnit, it’s simpler,’ I nodded and went along with it. Three months and a bloated test suite later, I was untangling execution order issues at 11pm on a Thursday. Not fun.
TestNG — Next Generation testing — fixed that. Not because it’s trendy, but because it was clearly designed by someone who had actually lived through the pain of maintaining a large Selenium project. The annotation system alone is worth the switch.
Here’s what genuinely changed day-to-day for our team:
• Lifecycle control became surgical — not just ‘before’ and ‘after’ but before what, exactly
• Smoke vs regression runs stopped requiring separate projects — one XML file, different groups
• Parallel execution dropped our suite runtime from 40 minutes to under 10
The @ symbol is how you talk to TestNG. Every annotation is an instruction — it sits above a method and tells the framework when to call it, under what conditions, and with what data. That’s the mental model. Keep it and the rest clicks into place.
TestNG Annotations — What Each One Actually Does
I’ve seen people paste an annotation cheat-sheet into their project and call it documentation. So let me actually explain what’s happening under the hood, because the ordering matters more than most tutorials admit.
| Annotation | What It Does |
|---|---|
| @Test | Marks a method as a test case — without this, TestNG ignores it |
| @BeforeMethod | Runs before every individual test method in the class |
| @AfterMethod | Runs after every individual test method in the class |
| @BeforeClass | Runs once before the very first test in a class executes |
| @AfterClass | Runs once after every test in the class has finished |
| @BeforeSuite | Executes once at the very start — before any test in the suite |
| @AfterSuite | Executes once at the very end — after all suite tests complete |
The hierarchy runs like this: Suite wraps everything. Class wraps your test methods. Method wraps each individual test. So if your @BeforeSuite driver initialization crashes, nothing runs. If @BeforeClass fails, every @Test in that class gets skipped. Understanding this chain saves you a lot of confusing stack traces.
One thing that bit us early on: @BeforeMethod runs before every single @Test, not just once. If your setup is expensive — spinning up a browser, say — that cost multiplies fast. We moved browser init to @BeforeClass and kept @BeforeMethod for lighter things like navigating to the start URL.
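To make that multiplication concrete, here's a pure-Java sketch of the call sequence TestNG produces for a class with two @Test methods. This is plain Java standing in for the framework, and the method names (testLogin, testDashboard) are illustrative:

```java
// Pure-Java sketch of TestNG's call order for one class with two @Test
// methods. The comments mark where our team put browser setup.
import java.util.ArrayList;
import java.util.List;

public class LifecycleOrder {
    static List<String> calls = new ArrayList<>();

    static void beforeClass()  { calls.add("beforeClass");  } // e.g. start the browser once
    static void beforeMethod() { calls.add("beforeMethod"); } // e.g. navigate to start URL
    static void test(String n) { calls.add(n); }
    static void afterMethod()  { calls.add("afterMethod");  }
    static void afterClass()   { calls.add("afterClass");   } // e.g. quit the browser

    public static void main(String[] args) {
        beforeClass();                      // once per class
        for (String t : new String[] {"testLogin", "testDashboard"}) {
            beforeMethod();                 // once per @Test — this cost multiplies
            test(t);
            afterMethod();
        }
        afterClass();                       // once per class
        System.out.println(calls);
    }
}
```

Running it shows beforeMethod appearing once per test while beforeClass appears exactly once, which is why expensive setup belongs in the latter.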
Priority — Because TestNG’s Default Order Will Surprise You
Nobody tells you this upfront: TestNG doesn’t run tests top-to-bottom the way you’d expect. It’s not random, but it’s not source-file order either. For isolated unit tests that’s fine. For end-to-end flows where login must happen before dashboard — it’s a problem.
Fix it with priority. The rule is dead simple: lower number runs first.
```java
@Test(priority = 1)
public void login() {}

@Test(priority = 2)
public void dashboard() {}

@Test(priority = 3)
public void logout() {}
```
Duplicate priority values? TestNG breaks ties alphabetically by method name. I’ve been burned by that exactly once — two methods both marked priority = 2, one named ‘addItem’ and one named ‘zeroCart’, and they ran in the wrong order on me. Keep priorities unique.
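The ordering rule can be expressed as plain Java: sort by priority, then break ties alphabetically by method name. The test names below are hypothetical:

```java
// Models TestNG's ordering rule: lower priority first, ties broken
// alphabetically by method name.
import java.util.Comparator;
import java.util.List;

public class PriorityOrder {
    record TestCase(String name, int priority) {}

    public static List<String> runOrder(List<TestCase> tests) {
        return tests.stream()
                .sorted(Comparator.comparingInt(TestCase::priority)
                                  .thenComparing(TestCase::name))
                .map(TestCase::name)
                .toList();
    }

    public static void main(String[] args) {
        // zeroCart and addItem share priority 2, so the tie goes
        // alphabetically regardless of source-file order.
        List<TestCase> tests = List.of(
                new TestCase("zeroCart", 2),
                new TestCase("login", 1),
                new TestCase("addItem", 2));
        System.out.println(runOrder(tests)); // [login, addItem, zeroCart]
    }
}
```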
Parallel Execution — This One Feature Alone Justifies the Switch
Our regression suite had 280 tests. Running them sequentially: 38 minutes. After switching on parallel execution in testng.xml with 4 threads: 9 minutes. Same coverage, same assertions, four times the speed. That kind of improvement changes what’s realistic to run on every pull request.
The best part is there’s zero code change involved. You configure it in testng.xml and TestNG handles the threading. The only catch is that your test code needs to be thread-safe — if two tests share static state or a single WebDriver instance, you’ll see race conditions that are incredibly annoying to debug.
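The usual fix for the shared-driver race is the ThreadLocal pattern: each test thread lazily gets its own instance. Here's a runnable sketch with a String standing in for WebDriver, so no Selenium dependency is needed:

```java
// ThreadLocal pattern commonly used for thread-safe parallel Selenium
// runs. A String stands in for the WebDriver instance here.
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadSafeDriver {
    // withInitial: each thread creates its own instance on first get()
    static final ThreadLocal<String> driver =
            ThreadLocal.withInitial(() -> "driver-for-" + Thread.currentThread().getName());

    public static String[] runTwoTestsInParallel() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CountDownLatch bothStarted = new CountDownLatch(2);
        Callable<String> test = () -> {
            bothStarted.countDown();
            bothStarted.await();   // hold both threads in flight at once
            return driver.get();   // per-thread instance — nothing shared
        };
        Future<String> a = pool.submit(test);
        Future<String> b = pool.submit(test);
        String[] result = { a.get(), b.get() };
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        String[] seen = runTwoTestsInParallel();
        System.out.println(seen[0]);
        System.out.println(seen[1]);
    }
}
```

The two "tests" see different instances, which is exactly what you want when TestNG's methods-level parallelism puts two @Test methods on different threads.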
testng.xml — The File That Ties Everything Together
New developers on our team always underestimate this file. It looks like boilerplate XML at first glance. It isn’t. It’s where you control which classes run, which methods get skipped, how many threads to use, which groups are active, and whether tests run sequentially or in parallel.
This example launches two test classes simultaneously across 2 threads:
```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="ParallelSuite" parallel="tests" thread-count="2">
  <test name="Test1">
    <classes>
      <class name="tests.LoginTest"/>
    </classes>
  </test>
  <test name="Test2">
    <classes>
      <class name="tests.DashboardTest"/>
    </classes>
  </test>
</suite>
```
The parallel attribute accepts three values: tests (runs <test> blocks in parallel), classes (runs classes in parallel), and methods (runs individual test methods in parallel). Start with tests — it’s the most predictable. Methods-level parallelism is powerful but requires genuinely thread-safe code throughout.
Also worth noting: thread-count is a ceiling. If you set it to 8 but only have 3 test classes, you’ll get 3 threads. It won’t waste resources spinning up empties.
Test Grouping — One Codebase, Multiple Run Profiles
Before we added groups, running just our smoke tests meant commenting out half the test classes in testng.xml. Every time. Someone would forget to uncomment something before the nightly build and we’d get incomplete coverage reports. Groups ended all of that.
You tag methods with whatever group names make sense for your project:
```java
@Test(groups = "smoke")
public void smokeTest() {}

@Test(groups = "regression")
public void regressionTest() {}
```
Then your testng.xml specifies which groups to include or exclude for each run. Pre-release: run smoke. Nightly build: run regression. One test file, two completely different execution profiles. No duplication, no commenting things out, no accidents.
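A minimal sketch of that wiring, with an illustrative suite name and the group names from above:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="SmokeSuite">
  <test name="SmokeOnly">
    <groups>
      <run>
        <include name="smoke"/>
        <exclude name="regression"/>
      </run>
    </groups>
    <classes>
      <class name="tests.LoginTest"/>
    </classes>
  </test>
</suite>
```

Swap the include to regression for the nightly build and nothing else changes.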
Data-Driven Testing with @DataProvider
Here’s a scenario I hit constantly: testing a login form with ten different credential combinations — valid users, invalid passwords, locked accounts, empty fields. The naive approach is ten separate test methods. That’s ten times the code to maintain when the form changes.
@DataProvider solves this properly. Write the test logic once, supply the data separately:
```java
@DataProvider(name = "loginData")
public Object[][] data() {
    return new Object[][] {
        {"admin", "1234"},
        {"user", "abcd"}
    };
}

@Test(dataProvider = "loginData")
public void loginTest(String username, String password) {
    System.out.println(username + " " + password);
}
```
TestNG runs the test method once per row. Two rows, two executions. Fifty rows, fifty executions. The reports show each run separately so failures are easy to pinpoint.
We take it further and load the data from an Excel sheet — the @DataProvider method reads the file and returns the 2D array. Testers who can’t write Java can still add test data by editing the spreadsheet. That’s been genuinely useful.
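Reading Excel needs a library like Apache POI, but the shape of the idea fits in plain JDK code. Here's a sketch using an in-memory CSV string instead of a spreadsheet, so it runs standalone; the class and method names are illustrative:

```java
// Builds a DataProvider-style Object[][] from external text data.
// An in-memory CSV string stands in for the Excel sheet here.
import java.util.Arrays;

public class CsvDataProvider {
    // Each CSV line becomes one row of test arguments.
    public static Object[][] parse(String csv) {
        return csv.lines()
                .map(line -> (Object[]) line.split(","))
                .toArray(Object[][]::new);
    }

    public static void main(String[] args) {
        Object[][] rows = parse("admin,1234\nuser,abcd");
        // TestNG would invoke the test once per row with these arguments.
        for (Object[] row : rows) {
            System.out.println(Arrays.toString(row));
        }
    }
}
```

Point the real @DataProvider method at a file on disk and testers can add rows without touching Java.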
Test Dependencies — When One Test Needs Another to Pass First
Some flows only make sense in sequence. You can’t test the shopping cart if the login failed. You can’t test logout if you never logged in. dependsOnMethods is how TestNG expresses those relationships:
```java
import org.testng.annotations.Test;

public class LoginTest {
    @Test
    public void openBrowser() {
        System.out.println("Browser Opened");
    }

    @Test(dependsOnMethods = "openBrowser")
    public void login() {
        System.out.println("User Logged In");
    }

    @Test(dependsOnMethods = "login")
    public void dashboard() {
        System.out.println("Dashboard Loaded");
    }
}
```
When openBrowser fails, TestNG marks login and dashboard as SKIPPED — not FAILED. That distinction matters. A red FAILED on a test that never even ran is misleading. SKIPPED tells you the root cause is upstream.
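The propagation logic itself is simple enough to model in plain Java. This sketch is my simulation of the rule, not TestNG's internals: a test only runs if everything it depends on passed.

```java
// Simulates TestNG's skip propagation: a test whose dependency did not
// PASS is marked SKIPPED, never FAILED.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DependencySkip {
    enum Status { PASSED, FAILED, SKIPPED }

    public static Map<String, Status> run(List<String> order,
                                          Map<String, String> dependsOn,
                                          Set<String> failing) {
        Map<String, Status> results = new LinkedHashMap<>();
        for (String test : order) {
            String dep = dependsOn.get(test);
            if (dep != null && results.get(dep) != Status.PASSED) {
                results.put(test, Status.SKIPPED);   // upstream problem
            } else if (failing.contains(test)) {
                results.put(test, Status.FAILED);    // the real root cause
            } else {
                results.put(test, Status.PASSED);
            }
        }
        return results;
    }

    public static void main(String[] args) {
        Map<String, Status> r = run(
                List.of("openBrowser", "login", "dashboard"),
                Map.of("login", "openBrowser", "dashboard", "login"),
                Set.of("openBrowser"));              // openBrowser blows up
        System.out.println(r); // {openBrowser=FAILED, login=SKIPPED, dashboard=SKIPPED}
    }
}
```

One FAILED, two SKIPPED: the report points you straight at openBrowser.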
That said — don’t overdo it. Deep dependency chains become hard to maintain. If every test in your suite depends on the previous one, a single early failure wipes out your entire results page. We keep dependencies shallow: usually just 2-3 levels deep, and only for tests that genuinely can’t run independently.
Built-In HTML Reports
Run your tests and look for a folder called test-output in your project root. Inside is index.html. Open it in a browser. You’ll see a full breakdown: which tests passed, which failed, which were skipped, how long each took.
It’s not pretty by modern standards — no charts, no trend lines, nothing your stakeholders would want to look at. But for developers it’s solid. During active development I check it constantly. When we need something more polished for sprint reviews, we layer Extent Reports on top, which integrates cleanly with TestNG via listeners.
A Minimal TestNG Class — Start Here if You’re New
Before diving into the advanced stuff, here’s the simplest possible TestNG test. Get this running first. Everything else builds on it.
```java
import org.testng.annotations.Test;

public class Demo_Test {
    @Test
    public void test_One() {
        System.out.println("Test One Executed");
    }

    @Test
    public void test_Two() {
        System.out.println("Test Two Executed");
    }
}
```
Both methods run. No testng.xml needed for this — TestNG discovers the @Test annotations automatically. Right-click the class, run as TestNG Test, done. Add testng.xml only when you need grouping, parallel runs, or multi-class suite configuration.
Assertions in TestNG — Your Tests Need These
A test that doesn’t check anything isn’t a test. It’s just automation. Assertions are the actual verification layer — they’re what makes a test pass or fail based on what the application did.
The logic is binary: condition is true → test passes. Condition is false → test fails. Every assertion you write is asking: ‘did the app do what I expected here?’
Hard vs. Soft Assertions — Know the Difference
This trips people up regularly and the confusion leads to real bugs slipping through.
Hard Assertion — Stops on First Failure
Standard assertions are hard by default. The moment one fails, the test stops. Lines after the failing assertion never execute. Use this when a failure makes the rest of the test meaningless — if the login button isn’t present, checking the dashboard is pointless.
```java
Assert.assertEquals(actual, expected);
```
Soft Assertion — Collects All Failures First
Soft assertions keep running even when one fails. All assertions execute, failures pile up internally, then assertAll() throws everything at once at the end. Use this on forms or pages where you want to see every broken field, not just the first one.
```java
SoftAssert soft = new SoftAssert();
soft.assertEquals(actual, expected);
soft.assertAll();
```
Forgetting assertAll() is a classic mistake — your soft assertions will silently pass even when they fail. Make it a habit: every SoftAssert instance ends with assertAll().
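If the collect-then-throw behavior feels abstract, this minimal pure-Java model of it (my sketch, not TestNG's SoftAssert source) shows why a forgotten assertAll() hides failures:

```java
// Minimal model of soft-assertion semantics: failures accumulate
// silently, and only assertAll() throws.
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class MiniSoftAssert {
    private final List<String> failures = new ArrayList<>();

    public void assertEquals(Object actual, Object expected) {
        if (!Objects.equals(actual, expected)) {
            failures.add("expected [" + expected + "] but found [" + actual + "]");
        } // note: no throw here — the test keeps running
    }

    public void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " assertion(s) failed: " + failures);
        }
    }

    public static void main(String[] args) {
        MiniSoftAssert soft = new MiniSoftAssert();
        soft.assertEquals("red", "green");   // recorded, not thrown
        soft.assertEquals(2, 2);             // passes
        soft.assertEquals("a", "b");         // recorded, not thrown
        try {
            soft.assertAll();                // everything surfaces at once
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Drop the assertAll() call and the method above completes without a single error, even though two checks failed. That is exactly the trap.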
Assertion Methods Worth Knowing
• assertEquals() — actual and expected must match exactly
• assertTrue() — condition must evaluate to true
• assertFalse() — condition must evaluate to false
• assertNull() — value must be null
• assertNotNull() — value must not be null
My rule of thumb after using these daily for years: use hard assertions for critical navigation checkpoints, soft assertions for validating page content or form field states.
Is TestNG Actually the Right Choice for Your Project?
Honest answer: not always. Here’s when it genuinely earns its place:
• Your suite has grown past ~40 tests and JUnit’s limited lifecycle hooks feel like they’re fighting you
• You need to split runs — smoke before deployment, full regression nightly — without maintaining separate codebases
• CI run times are becoming a genuine bottleneck and parallel execution would fix it
• Your team wants clean test reports without assembling a bunch of third-party integrations
Small project, one developer, 20 tests? JUnit is fine. Don’t over-engineer it. But once you’re managing multiple test classes across a team of developers and your suite takes 30+ minutes to run, TestNG earns back its setup cost quickly.
TestNG Listeners — The Feature Most Tutorials Skip
Listeners are how TestNG lets you hook custom behavior into the test lifecycle without modifying the tests themselves. When I describe them to new team members, I say: ‘Imagine getting a callback every time a test passes, fails, or skips — and being able to do anything in that callback.’
That ‘anything’ is where the real value lives. What our team does with listeners:
• Screenshot captured automatically the millisecond a test fails — attached to the report, no manual intervention
• Each test method gets a log section in our custom report, opened in onTestStart and closed in onTestSuccess or onTestFailure
• Flaky tests get one automatic retry before being marked failed — configured through a retry listener
• Slack notification fires when the failure rate in a build exceeds 10%
ITestListener — The Interface You’ll Actually Use
Implement ITestListener and register it in testng.xml. TestNG calls your methods at every lifecycle event. The ones we override most often: onTestFailure, onTestSuccess, and onTestSkipped.
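A skeleton of what that looks like. This is a sketch under assumptions: the helper name captureScreenshot is hypothetical, and the class needs TestNG on the classpath and registration via a `<listeners>` entry in testng.xml (or @Listeners on the test class) to actually fire:

```java
import org.testng.ITestListener;
import org.testng.ITestResult;

public class ReportListener implements ITestListener {
    @Override
    public void onTestFailure(ITestResult result) {
        // Fires the moment a test fails — the ideal spot for a screenshot.
        System.out.println("FAILED: " + result.getName());
        // captureScreenshot(result.getName());  // hypothetical helper
    }

    @Override
    public void onTestSuccess(ITestResult result) {
        System.out.println("PASSED: " + result.getName());
    }

    @Override
    public void onTestSkipped(ITestResult result) {
        System.out.println("SKIPPED: " + result.getName());
    }
}
```

The remaining ITestListener methods have default implementations in modern TestNG, so you only override the events you care about.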
The screenshot-on-failure hook alone has saved dozens of debugging hours. Instead of re-running a failed test and trying to catch what was on screen, we had a screenshot timestamped to the exact moment of failure, right there in the report.
TestNG vs. Cucumber — Choosing the Right Tool
This comes up in nearly every project kickoff I’ve been involved in. My honest position: they solve different problems and asking ‘which is better’ is the wrong question.
| TestNG | Cucumber |
|---|---|
| Pure Java, annotation-based | Gherkin syntax — Given / When / Then |
| Dev & automation engineer territory | QA, devs, AND business stakeholders can all read it |
| Technical, code-heavy setup | Scenarios read like plain English |
| @Test, @BeforeMethod, @AfterClass… | Step definitions behind the scenes |
| Parallel execution built right in | Needs extra config to run parallel |
| Auto HTML report on every run | Needs Extent/Allure for anything polished |
| Best for large, complex frameworks | Best when business sign-off on tests matters |
If your entire test team is developers and automation engineers, TestNG wins on structure, control, and tooling. The annotation model is natural for Java developers and the parallel execution support is genuinely superior.
If your project needs business stakeholders — product owners, BAs, clients — to read, review, or contribute to test scenarios, Cucumber’s Gherkin format makes that feasible in a way TestNG simply can’t match. Non-technical people can actually understand a Given-When-Then scenario. They can’t read @Test methods.
Some mature teams use both: TestNG for unit and integration layers, Cucumber for acceptance and end-to-end. It’s not an either/or if your project is large enough to justify both.
TestNG Advantages and Disadvantages
Where TestNG Genuinely Delivers
1. Lifecycle annotations cover every level — method, class, suite — so setup/teardown is never guesswork
2. Parallel execution is a config change, not a code change — no refactoring needed
3. Group-based execution means one codebase handles smoke, regression, and sanity runs
4. @DataProvider removes test duplication for data-heavy scenarios completely
5. dependsOnMethods models real user flows and prevents misleading downstream failures
6. Priority-based ordering makes execution sequence explicit and predictable
7. HTML reports generate on every run with zero configuration — useful from day one
8. Maven, Jenkins, and Selenium all integrate with TestNG without friction
9. Parameterization via testng.xml lets you vary environments and inputs from outside the code
10. Strong community, years of Stack Overflow answers, and mature documentation
Where TestNG Will Frustrate You
1. testng.xml feels unnecessarily complex to newcomers — and honestly, some of that complexity is real
2. Annotation overload is real: new team members often spend days just learning which annotation does what
3. Large dependency chains across hundreds of tests become genuinely difficult to maintain or refactor
4. BDD support is basically nonexistent — don't use TestNG if stakeholders need to read test scenarios
5. Steeper initial overhead than JUnit — for small projects, that overhead is hard to justify
6. Debugging a failed dependency chain is painful; error messages don't always identify the actual root cause
7. For a 20-test project it's overkill — simpler tools exist and you should use them
8. Thread safety in parallel runs is your responsibility — shared state causes race conditions that are hard to reproduce
9. Default reports are functional but not presentable — client-facing reporting needs Extent or Allure on top
10. Heavy use of priorities and dependencies slowly makes your suite fragile and resistant to change
Final Thoughts
I’ve used TestNG on projects ranging from a 50-test suite for a small SaaS app to a 2,000-test framework covering a financial platform. It’s held up well in both contexts, though the value scales with project size. Small project, the overhead is noticeable. Large project, you couldn’t run it sanely without it.
The two features I’d miss most if they disappeared tomorrow: parallel execution and test grouping. Those two together changed how we structured our entire CI pipeline. Everything else is a bonus.
If you’re starting fresh with Selenium, just go with TestNG from the beginning. The learning curve is front-loaded but it flattens out fast. Setting it up correctly early costs a week. Migrating a chaotic JUnit suite to TestNG six months later costs much more.
One last thing — and I mean this genuinely — read the testng.xml documentation properly before you think you understand it. It does more than most people realize. The parallel attribute options alone have saved us from some serious re-architecture work.